00:00:00.000 Started by upstream project "autotest-per-patch" build number 132426 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.014 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.016 The recommended git tool is: git 00:00:00.016 using credential 00000000-0000-0000-0000-000000000002 00:00:00.018 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.032 Fetching changes from the remote Git repository 00:00:00.038 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.054 Using shallow fetch with depth 1 00:00:00.054 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.054 > git --version # timeout=10 00:00:00.069 > git --version # 'git version 2.39.2' 00:00:00.069 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.097 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.097 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.336 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.350 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.363 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.363 > git config core.sparsecheckout # timeout=10 00:00:02.383 > git read-tree -mu HEAD # timeout=10 00:00:02.401 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.426 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.427 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.539 [Pipeline] Start of Pipeline 00:00:02.559 [Pipeline] library 00:00:02.562 Loading library shm_lib@master 00:00:02.562 Library shm_lib@master is cached. Copying from home. 00:00:02.584 [Pipeline] node 00:00:02.593 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest 00:00:02.598 [Pipeline] { 00:00:02.613 [Pipeline] catchError 00:00:02.615 [Pipeline] { 00:00:02.629 [Pipeline] wrap 00:00:02.640 [Pipeline] { 00:00:02.651 [Pipeline] stage 00:00:02.654 [Pipeline] { (Prologue) 00:00:02.673 [Pipeline] echo 00:00:02.674 Node: VM-host-SM17 00:00:02.679 [Pipeline] cleanWs 00:00:02.687 [WS-CLEANUP] Deleting project workspace... 00:00:02.687 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.692 [WS-CLEANUP] done 00:00:02.875 [Pipeline] setCustomBuildProperty 00:00:02.977 [Pipeline] httpRequest 00:00:03.383 [Pipeline] echo 00:00:03.385 Sorcerer 10.211.164.101 is alive 00:00:03.396 [Pipeline] retry 00:00:03.399 [Pipeline] { 00:00:03.411 [Pipeline] httpRequest 00:00:03.416 HttpMethod: GET 00:00:03.417 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.417 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.419 Response Code: HTTP/1.1 200 OK 00:00:03.420 Success: Status code 200 is in the accepted range: 200,404 00:00:03.420 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.565 [Pipeline] } 00:00:03.584 [Pipeline] // retry 00:00:03.594 [Pipeline] sh 00:00:03.877 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.892 [Pipeline] httpRequest 00:00:04.255 [Pipeline] echo 00:00:04.256 Sorcerer 10.211.164.101 is alive 00:00:04.270 [Pipeline] retry 00:00:04.272 [Pipeline] { 00:00:04.287 [Pipeline] httpRequest 00:00:04.292 HttpMethod: GET 00:00:04.292 URL: 
http://10.211.164.101/packages/spdk_25916e30c12b1890f2f14b68ff706bbadf4e3895.tar.gz 00:00:04.292 Sending request to url: http://10.211.164.101/packages/spdk_25916e30c12b1890f2f14b68ff706bbadf4e3895.tar.gz 00:00:04.295 Response Code: HTTP/1.1 200 OK 00:00:04.295 Success: Status code 200 is in the accepted range: 200,404 00:00:04.296 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_25916e30c12b1890f2f14b68ff706bbadf4e3895.tar.gz 00:00:07.284 [Pipeline] } 00:00:07.300 [Pipeline] // retry 00:00:07.306 [Pipeline] sh 00:00:07.584 + tar --no-same-owner -xf spdk_25916e30c12b1890f2f14b68ff706bbadf4e3895.tar.gz 00:00:10.884 [Pipeline] sh 00:00:11.165 + git -C spdk log --oneline -n5 00:00:11.165 25916e30c bdevperf: Store the result of DIF type check into job structure 00:00:11.165 bd9804982 bdevperf: g_main_thread calls bdev_open() instead of job->thread 00:00:11.165 2e015e34f bdevperf: Remove TAILQ_REMOVE which may result in potential memory leak 00:00:11.165 aae11995f bdev/malloc: Fix unexpected DIF verification error for initial read 00:00:11.165 7bc1aace1 dif: Set DIF field to 0 explicitly if its check is disabled 00:00:11.183 [Pipeline] writeFile 00:00:11.197 [Pipeline] sh 00:00:11.478 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:11.490 [Pipeline] sh 00:00:11.772 + cat autorun-spdk.conf 00:00:11.772 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:11.772 SPDK_RUN_ASAN=1 00:00:11.772 SPDK_RUN_UBSAN=1 00:00:11.772 SPDK_TEST_RAID=1 00:00:11.772 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:11.779 RUN_NIGHTLY=0 00:00:11.781 [Pipeline] } 00:00:11.798 [Pipeline] // stage 00:00:11.815 [Pipeline] stage 00:00:11.817 [Pipeline] { (Run VM) 00:00:11.831 [Pipeline] sh 00:00:12.111 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:12.111 + echo 'Start stage prepare_nvme.sh' 00:00:12.111 Start stage prepare_nvme.sh 00:00:12.111 + [[ -n 5 ]] 00:00:12.111 + disk_prefix=ex5 00:00:12.111 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:00:12.111 + 
[[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:00:12.111 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:00:12.111 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:12.111 ++ SPDK_RUN_ASAN=1 00:00:12.111 ++ SPDK_RUN_UBSAN=1 00:00:12.111 ++ SPDK_TEST_RAID=1 00:00:12.111 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:12.111 ++ RUN_NIGHTLY=0 00:00:12.111 + cd /var/jenkins/workspace/raid-vg-autotest 00:00:12.111 + nvme_files=() 00:00:12.111 + declare -A nvme_files 00:00:12.111 + backend_dir=/var/lib/libvirt/images/backends 00:00:12.111 + nvme_files['nvme.img']=5G 00:00:12.111 + nvme_files['nvme-cmb.img']=5G 00:00:12.111 + nvme_files['nvme-multi0.img']=4G 00:00:12.111 + nvme_files['nvme-multi1.img']=4G 00:00:12.111 + nvme_files['nvme-multi2.img']=4G 00:00:12.111 + nvme_files['nvme-openstack.img']=8G 00:00:12.111 + nvme_files['nvme-zns.img']=5G 00:00:12.111 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:12.111 + (( SPDK_TEST_FTL == 1 )) 00:00:12.111 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:12.111 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:12.111 + for nvme in "${!nvme_files[@]}" 00:00:12.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:12.111 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:12.111 + for nvme in "${!nvme_files[@]}" 00:00:12.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:12.111 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:12.111 + for nvme in "${!nvme_files[@]}" 00:00:12.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:12.111 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:12.111 + for nvme in "${!nvme_files[@]}" 00:00:12.111 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:12.111 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:12.111 + for nvme in "${!nvme_files[@]}" 00:00:12.112 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:12.112 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:12.112 + for nvme in "${!nvme_files[@]}" 00:00:12.112 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:12.112 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:12.112 + for nvme in "${!nvme_files[@]}" 00:00:12.112 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:12.688 
Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:12.688 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:12.688 + echo 'End stage prepare_nvme.sh' 00:00:12.688 End stage prepare_nvme.sh 00:00:12.700 [Pipeline] sh 00:00:12.982 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:12.982 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:00:12.982 00:00:12.982 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:00:12.982 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:00:12.982 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:00:12.982 HELP=0 00:00:12.982 DRY_RUN=0 00:00:12.982 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:00:12.982 NVME_DISKS_TYPE=nvme,nvme, 00:00:12.982 NVME_AUTO_CREATE=0 00:00:12.982 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:00:12.982 NVME_CMB=,, 00:00:12.982 NVME_PMR=,, 00:00:12.982 NVME_ZNS=,, 00:00:12.982 NVME_MS=,, 00:00:12.982 NVME_FDP=,, 00:00:12.982 SPDK_VAGRANT_DISTRO=fedora39 00:00:12.982 SPDK_VAGRANT_VMCPU=10 00:00:12.982 SPDK_VAGRANT_VMRAM=12288 00:00:12.982 SPDK_VAGRANT_PROVIDER=libvirt 00:00:12.982 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:12.982 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:12.982 SPDK_OPENSTACK_NETWORK=0 00:00:12.982 VAGRANT_PACKAGE_BOX=0 00:00:12.982 
VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:12.982 FORCE_DISTRO=true 00:00:12.982 VAGRANT_BOX_VERSION= 00:00:12.982 EXTRA_VAGRANTFILES= 00:00:12.982 NIC_MODEL=e1000 00:00:12.982 00:00:12.982 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:00:12.982 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:00:16.268 Bringing machine 'default' up with 'libvirt' provider... 00:00:16.527 ==> default: Creating image (snapshot of base box volume). 00:00:16.527 ==> default: Creating domain with the following settings... 00:00:16.527 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732121560_f7ffae028f6e88926465 00:00:16.527 ==> default: -- Domain type: kvm 00:00:16.527 ==> default: -- Cpus: 10 00:00:16.527 ==> default: -- Feature: acpi 00:00:16.527 ==> default: -- Feature: apic 00:00:16.527 ==> default: -- Feature: pae 00:00:16.527 ==> default: -- Memory: 12288M 00:00:16.527 ==> default: -- Memory Backing: hugepages: 00:00:16.527 ==> default: -- Management MAC: 00:00:16.527 ==> default: -- Loader: 00:00:16.527 ==> default: -- Nvram: 00:00:16.527 ==> default: -- Base box: spdk/fedora39 00:00:16.527 ==> default: -- Storage pool: default 00:00:16.527 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732121560_f7ffae028f6e88926465.img (20G) 00:00:16.527 ==> default: -- Volume Cache: default 00:00:16.527 ==> default: -- Kernel: 00:00:16.527 ==> default: -- Initrd: 00:00:16.527 ==> default: -- Graphics Type: vnc 00:00:16.527 ==> default: -- Graphics Port: -1 00:00:16.527 ==> default: -- Graphics IP: 127.0.0.1 00:00:16.527 ==> default: -- Graphics Password: Not defined 00:00:16.527 ==> default: -- Video Type: cirrus 00:00:16.527 ==> default: -- Video VRAM: 9216 00:00:16.527 ==> default: -- Sound Type: 00:00:16.527 ==> default: -- Keymap: en-us 00:00:16.527 ==> default: -- TPM Path: 00:00:16.527 ==> 
default: -- INPUT: type=mouse, bus=ps2 00:00:16.527 ==> default: -- Command line args: 00:00:16.527 ==> default: -> value=-device, 00:00:16.527 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:16.527 ==> default: -> value=-drive, 00:00:16.527 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:00:16.527 ==> default: -> value=-device, 00:00:16.527 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:16.527 ==> default: -> value=-device, 00:00:16.527 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:16.527 ==> default: -> value=-drive, 00:00:16.527 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:16.527 ==> default: -> value=-device, 00:00:16.527 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:16.527 ==> default: -> value=-drive, 00:00:16.527 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:16.527 ==> default: -> value=-device, 00:00:16.527 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:16.527 ==> default: -> value=-drive, 00:00:16.527 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:16.527 ==> default: -> value=-device, 00:00:16.527 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:16.786 ==> default: Creating shared folders metadata... 00:00:16.787 ==> default: Starting domain. 00:00:18.175 ==> default: Waiting for domain to get an IP address... 00:00:36.260 ==> default: Waiting for SSH to become available... 
00:00:36.260 ==> default: Configuring and enabling network interfaces... 00:00:38.163 default: SSH address: 192.168.121.246:22 00:00:38.163 default: SSH username: vagrant 00:00:38.163 default: SSH auth method: private key 00:00:40.697 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:00:48.813 ==> default: Mounting SSHFS shared folder... 00:00:49.383 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:00:49.383 ==> default: Checking Mount.. 00:00:50.799 ==> default: Folder Successfully Mounted! 00:00:50.799 ==> default: Running provisioner: file... 00:00:51.368 default: ~/.gitconfig => .gitconfig 00:00:51.940 00:00:51.940 SUCCESS! 00:00:51.940 00:00:51.940 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:00:51.940 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:00:51.940 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:00:51.940 00:00:51.949 [Pipeline] } 00:00:51.966 [Pipeline] // stage 00:00:51.974 [Pipeline] dir 00:00:51.975 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:00:51.976 [Pipeline] { 00:00:51.989 [Pipeline] catchError 00:00:51.992 [Pipeline] { 00:00:52.005 [Pipeline] sh 00:00:52.286 + vagrant ssh-config --host vagrant 00:00:52.286 + sed -ne /^Host/,$p 00:00:52.286 + tee ssh_conf 00:00:56.487 Host vagrant 00:00:56.487 HostName 192.168.121.246 00:00:56.487 User vagrant 00:00:56.487 Port 22 00:00:56.487 UserKnownHostsFile /dev/null 00:00:56.487 StrictHostKeyChecking no 00:00:56.487 PasswordAuthentication no 00:00:56.487 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:00:56.487 IdentitiesOnly yes 00:00:56.487 LogLevel FATAL 00:00:56.487 ForwardAgent yes 00:00:56.487 ForwardX11 yes 00:00:56.487 00:00:56.502 [Pipeline] withEnv 00:00:56.504 [Pipeline] { 00:00:56.523 [Pipeline] sh 00:00:56.801 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:00:56.801 source /etc/os-release 00:00:56.801 [[ -e /image.version ]] && img=$(< /image.version) 00:00:56.801 # Minimal, systemd-like check. 00:00:56.801 if [[ -e /.dockerenv ]]; then 00:00:56.801 # Clear garbage from the node's name: 00:00:56.801 # agt-er_autotest_547-896 -> autotest_547-896 00:00:56.801 # $HOSTNAME is the actual container id 00:00:56.801 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:00:56.801 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:00:56.801 # We can assume this is a mount from a host where container is running, 00:00:56.801 # so fetch its hostname to easily identify the target swarm worker. 
00:00:56.801 container="$(< /etc/hostname) ($agent)" 00:00:56.801 else 00:00:56.801 # Fallback 00:00:56.801 container=$agent 00:00:56.801 fi 00:00:56.801 fi 00:00:56.801 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:00:56.801 00:00:57.071 [Pipeline] } 00:00:57.088 [Pipeline] // withEnv 00:00:57.096 [Pipeline] setCustomBuildProperty 00:00:57.111 [Pipeline] stage 00:00:57.114 [Pipeline] { (Tests) 00:00:57.131 [Pipeline] sh 00:00:57.411 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:00:57.684 [Pipeline] sh 00:00:57.963 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:00:58.236 [Pipeline] timeout 00:00:58.237 Timeout set to expire in 1 hr 30 min 00:00:58.239 [Pipeline] { 00:00:58.253 [Pipeline] sh 00:00:58.531 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:00:59.103 HEAD is now at 25916e30c bdevperf: Store the result of DIF type check into job structure 00:00:59.116 [Pipeline] sh 00:00:59.394 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:00:59.667 [Pipeline] sh 00:00:59.947 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:00.222 [Pipeline] sh 00:01:00.503 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:00.762 ++ readlink -f spdk_repo 00:01:00.762 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:00.762 + [[ -n /home/vagrant/spdk_repo ]] 00:01:00.762 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:00.762 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:00.762 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:00.762 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:00.762 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:00.762 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:00.762 + cd /home/vagrant/spdk_repo 00:01:00.762 + source /etc/os-release 00:01:00.762 ++ NAME='Fedora Linux' 00:01:00.762 ++ VERSION='39 (Cloud Edition)' 00:01:00.762 ++ ID=fedora 00:01:00.762 ++ VERSION_ID=39 00:01:00.762 ++ VERSION_CODENAME= 00:01:00.762 ++ PLATFORM_ID=platform:f39 00:01:00.762 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:00.762 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:00.762 ++ LOGO=fedora-logo-icon 00:01:00.762 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:00.762 ++ HOME_URL=https://fedoraproject.org/ 00:01:00.762 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:00.763 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:00.763 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:00.763 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:00.763 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:00.763 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:00.763 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:00.763 ++ SUPPORT_END=2024-11-12 00:01:00.763 ++ VARIANT='Cloud Edition' 00:01:00.763 ++ VARIANT_ID=cloud 00:01:00.763 + uname -a 00:01:00.763 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:00.763 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:01.021 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:01.022 Hugepages 00:01:01.022 node hugesize free / total 00:01:01.022 node0 1048576kB 0 / 0 00:01:01.022 node0 2048kB 0 / 0 00:01:01.022 00:01:01.022 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:01.022 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:01.281 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:01.281 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:01.281 + rm -f /tmp/spdk-ld-path 00:01:01.281 + source autorun-spdk.conf 00:01:01.281 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.281 ++ SPDK_RUN_ASAN=1 00:01:01.281 ++ SPDK_RUN_UBSAN=1 00:01:01.281 ++ SPDK_TEST_RAID=1 00:01:01.281 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:01.281 ++ RUN_NIGHTLY=0 00:01:01.281 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:01.281 + [[ -n '' ]] 00:01:01.281 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:01.281 + for M in /var/spdk/build-*-manifest.txt 00:01:01.281 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:01.281 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:01.281 + for M in /var/spdk/build-*-manifest.txt 00:01:01.281 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:01.281 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:01.281 + for M in /var/spdk/build-*-manifest.txt 00:01:01.281 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:01.281 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:01.281 ++ uname 00:01:01.281 + [[ Linux == \L\i\n\u\x ]] 00:01:01.281 + sudo dmesg -T 00:01:01.281 + sudo dmesg --clear 00:01:01.281 + dmesg_pid=5204 00:01:01.281 + sudo dmesg -Tw 00:01:01.281 + [[ Fedora Linux == FreeBSD ]] 00:01:01.281 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:01.281 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:01.281 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:01.281 + [[ -x /usr/src/fio-static/fio ]] 00:01:01.281 + export FIO_BIN=/usr/src/fio-static/fio 00:01:01.281 + FIO_BIN=/usr/src/fio-static/fio 00:01:01.281 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:01.281 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:01.281 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:01.281 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:01.281 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:01.281 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:01.281 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:01.281 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:01.281 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:01.281 16:53:25 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:01.281 16:53:25 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:01.281 16:53:25 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.281 16:53:25 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:01.281 16:53:25 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:01.281 16:53:25 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:01.281 16:53:25 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:01.281 16:53:25 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:01.281 16:53:25 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:01.281 16:53:25 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:01.539 16:53:25 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:01.539 16:53:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:01.539 16:53:25 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:01.539 16:53:25 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:01.539 16:53:25 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:01.539 16:53:25 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:01.539 16:53:25 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.540 16:53:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.540 16:53:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.540 16:53:25 -- paths/export.sh@5 -- $ export PATH 00:01:01.540 16:53:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.540 16:53:25 -- 
common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:01.540 16:53:25 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:01.540 16:53:25 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732121605.XXXXXX 00:01:01.540 16:53:25 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732121605.Hq4VGt 00:01:01.540 16:53:25 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:01.540 16:53:25 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:01.540 16:53:25 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:01.540 16:53:25 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:01.540 16:53:25 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:01.540 16:53:25 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:01.540 16:53:25 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:01.540 16:53:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:01.540 16:53:25 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:01.540 16:53:25 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:01.540 16:53:25 -- pm/common@17 -- $ local monitor 00:01:01.540 16:53:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.540 16:53:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.540 16:53:25 -- pm/common@25 -- $ sleep 1 00:01:01.540 16:53:25 -- pm/common@21 -- $ date +%s 00:01:01.540 16:53:25 -- pm/common@21 -- $ date +%s 00:01:01.540 
16:53:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732121605 00:01:01.540 16:53:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732121605 00:01:01.540 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732121605_collect-vmstat.pm.log 00:01:01.540 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732121605_collect-cpu-load.pm.log 00:01:02.476 16:53:26 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:02.476 16:53:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:02.476 16:53:26 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:02.476 16:53:26 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:02.476 16:53:26 -- spdk/autobuild.sh@16 -- $ date -u 00:01:02.476 Wed Nov 20 04:53:26 PM UTC 2024 00:01:02.476 16:53:26 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:02.476 v25.01-pre-237-g25916e30c 00:01:02.477 16:53:26 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:02.477 16:53:26 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:02.477 16:53:26 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:02.477 16:53:26 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:02.477 16:53:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.477 ************************************ 00:01:02.477 START TEST asan 00:01:02.477 ************************************ 00:01:02.477 using asan 00:01:02.477 16:53:26 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:02.477 00:01:02.477 real 0m0.000s 00:01:02.477 user 0m0.000s 00:01:02.477 sys 0m0.000s 00:01:02.477 16:53:26 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:02.477 16:53:26 asan -- common/autotest_common.sh@10 -- $ set +x 
00:01:02.477 ************************************
00:01:02.477 END TEST asan
00:01:02.477 ************************************
00:01:02.477 16:53:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:02.477 16:53:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:02.477 16:53:26 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:02.477 16:53:26 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:02.477 16:53:26 -- common/autotest_common.sh@10 -- $ set +x
00:01:02.477 ************************************
00:01:02.477 START TEST ubsan
00:01:02.477 ************************************
00:01:02.477 using ubsan
00:01:02.477 16:53:26 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:02.477
00:01:02.477 real 0m0.000s
00:01:02.477 user 0m0.000s
00:01:02.477 sys 0m0.000s
00:01:02.477 16:53:26 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:02.477 16:53:26 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:02.477 ************************************
00:01:02.477 END TEST ubsan
00:01:02.477 ************************************
00:01:02.477 16:53:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:02.477 16:53:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:02.477 16:53:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:02.477 16:53:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:02.477 16:53:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:02.477 16:53:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:02.477 16:53:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:02.477 16:53:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:02.477 16:53:26 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:02.735 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:02.735 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:02.995 Using 'verbs' RDMA provider
00:01:18.812 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:28.787 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:29.305 Creating mk/config.mk...done.
00:01:29.305 Creating mk/cc.flags.mk...done.
00:01:29.305 Type 'make' to build.
00:01:29.305 16:53:53 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:29.305 16:53:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:29.305 16:53:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:29.305 16:53:53 -- common/autotest_common.sh@10 -- $ set +x
00:01:29.305 ************************************
00:01:29.305 START TEST make
00:01:29.305 ************************************
00:01:29.305 16:53:53 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:29.871 make[1]: Nothing to be done for 'all'.
00:01:42.101 The Meson build system
00:01:42.101 Version: 1.5.0
00:01:42.101 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:01:42.101 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:01:42.101 Build type: native build
00:01:42.101 Program cat found: YES (/usr/bin/cat)
00:01:42.101 Project name: DPDK
00:01:42.101 Project version: 24.03.0
00:01:42.101 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:42.101 C linker for the host machine: cc ld.bfd 2.40-14
00:01:42.101 Host machine cpu family: x86_64
00:01:42.101 Host machine cpu: x86_64
00:01:42.101 Message: ## Building in Developer Mode ##
00:01:42.101 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:42.101 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:01:42.101 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:42.101 Program python3 found: YES (/usr/bin/python3)
00:01:42.101 Program cat found: YES (/usr/bin/cat)
00:01:42.101 Compiler for C supports arguments -march=native: YES
00:01:42.101 Checking for size of "void *" : 8
00:01:42.101 Checking for size of "void *" : 8 (cached)
00:01:42.101 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:42.101 Library m found: YES
00:01:42.101 Library numa found: YES
00:01:42.101 Has header "numaif.h" : YES
00:01:42.101 Library fdt found: NO
00:01:42.101 Library execinfo found: NO
00:01:42.101 Has header "execinfo.h" : YES
00:01:42.101 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:42.101 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:42.101 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:42.101 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:42.101 Run-time dependency openssl found: YES 3.1.1
00:01:42.101 Run-time dependency libpcap found: YES 1.10.4
00:01:42.101 Has header "pcap.h" with dependency libpcap: YES
00:01:42.101 Compiler for C supports arguments -Wcast-qual: YES
00:01:42.101 Compiler for C supports arguments -Wdeprecated: YES
00:01:42.101 Compiler for C supports arguments -Wformat: YES
00:01:42.101 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:42.101 Compiler for C supports arguments -Wformat-security: NO
00:01:42.101 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:42.101 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:42.101 Compiler for C supports arguments -Wnested-externs: YES
00:01:42.101 Compiler for C supports arguments -Wold-style-definition: YES
00:01:42.101 Compiler for C supports arguments -Wpointer-arith: YES
00:01:42.101 Compiler for C supports arguments -Wsign-compare: YES
00:01:42.101 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:42.101 Compiler for C supports arguments -Wundef: YES
00:01:42.101 Compiler for C supports arguments -Wwrite-strings: YES
00:01:42.101 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:42.101 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:42.101 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:42.101 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:42.101 Program objdump found: YES (/usr/bin/objdump)
00:01:42.101 Compiler for C supports arguments -mavx512f: YES
00:01:42.101 Checking if "AVX512 checking" compiles: YES
00:01:42.101 Fetching value of define "__SSE4_2__" : 1
00:01:42.101 Fetching value of define "__AES__" : 1
00:01:42.101 Fetching value of define "__AVX__" : 1
00:01:42.101 Fetching value of define "__AVX2__" : 1
00:01:42.101 Fetching value of define "__AVX512BW__" : (undefined)
00:01:42.101 Fetching value of define "__AVX512CD__" : (undefined)
00:01:42.101 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:42.101 Fetching value of define "__AVX512F__" : (undefined)
00:01:42.101 Fetching value of define "__AVX512VL__" : (undefined)
00:01:42.101 Fetching value of define "__PCLMUL__" : 1
00:01:42.101 Fetching value of define "__RDRND__" : 1
00:01:42.101 Fetching value of define "__RDSEED__" : 1
00:01:42.101 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:42.101 Fetching value of define "__znver1__" : (undefined)
00:01:42.101 Fetching value of define "__znver2__" : (undefined)
00:01:42.101 Fetching value of define "__znver3__" : (undefined)
00:01:42.101 Fetching value of define "__znver4__" : (undefined)
00:01:42.101 Library asan found: YES
00:01:42.101 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:42.101 Message: lib/log: Defining dependency "log"
00:01:42.101 Message: lib/kvargs: Defining dependency "kvargs"
00:01:42.101 Message: lib/telemetry: Defining dependency "telemetry"
00:01:42.101 Library rt found: YES
00:01:42.101 Checking for function "getentropy" : NO
00:01:42.101 Message: lib/eal: Defining dependency "eal"
00:01:42.101 Message: lib/ring: Defining dependency "ring"
00:01:42.101 Message: lib/rcu: Defining dependency "rcu"
00:01:42.101 Message: lib/mempool: Defining dependency "mempool"
00:01:42.101 Message: lib/mbuf: Defining dependency "mbuf"
00:01:42.101 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:42.101 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:42.101 Compiler for C supports arguments -mpclmul: YES
00:01:42.101 Compiler for C supports arguments -maes: YES
00:01:42.101 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:42.101 Compiler for C supports arguments -mavx512bw: YES
00:01:42.101 Compiler for C supports arguments -mavx512dq: YES
00:01:42.101 Compiler for C supports arguments -mavx512vl: YES
00:01:42.101 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:42.101 Compiler for C supports arguments -mavx2: YES
00:01:42.101 Compiler for C supports arguments -mavx: YES
00:01:42.101 Message: lib/net: Defining dependency "net"
00:01:42.101 Message: lib/meter: Defining dependency "meter"
00:01:42.102 Message: lib/ethdev: Defining dependency "ethdev"
00:01:42.102 Message: lib/pci: Defining dependency "pci"
00:01:42.102 Message: lib/cmdline: Defining dependency "cmdline"
00:01:42.102 Message: lib/hash: Defining dependency "hash"
00:01:42.102 Message: lib/timer: Defining dependency "timer"
00:01:42.102 Message: lib/compressdev: Defining dependency "compressdev"
00:01:42.102 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:42.102 Message: lib/dmadev: Defining dependency "dmadev"
00:01:42.102 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:42.102 Message: lib/power: Defining dependency "power"
00:01:42.102 Message: lib/reorder: Defining dependency "reorder"
00:01:42.102 Message: lib/security: Defining dependency "security"
00:01:42.102 Has header "linux/userfaultfd.h" : YES
00:01:42.102 Has header "linux/vduse.h" : YES
00:01:42.102 Message: lib/vhost: Defining dependency "vhost"
00:01:42.102 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:42.102 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:42.102 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:42.102 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:42.102 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:42.102 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:42.102 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:42.102 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:42.102 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:42.102 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:42.102 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:42.102 Configuring doxy-api-html.conf using configuration
00:01:42.102 Configuring doxy-api-man.conf using configuration
00:01:42.102 Program mandb found: YES (/usr/bin/mandb)
00:01:42.102 Program sphinx-build found: NO
00:01:42.102 Configuring rte_build_config.h using configuration
00:01:42.102 Message:
00:01:42.102 =================
00:01:42.102 Applications Enabled
00:01:42.102 =================
00:01:42.102
00:01:42.102 apps:
00:01:42.102
00:01:42.102
00:01:42.102 Message:
00:01:42.102 =================
00:01:42.102 Libraries Enabled
00:01:42.102 =================
00:01:42.102
00:01:42.102 libs:
00:01:42.102 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:42.102 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:42.102 cryptodev, dmadev, power, reorder, security, vhost,
00:01:42.102
00:01:42.102 Message:
00:01:42.102 ===============
00:01:42.102 Drivers Enabled
00:01:42.102 ===============
00:01:42.102
00:01:42.102 common:
00:01:42.102
00:01:42.102 bus:
00:01:42.102 pci, vdev,
00:01:42.102 mempool:
00:01:42.102 ring,
00:01:42.102 dma:
00:01:42.102
00:01:42.102 net:
00:01:42.102
00:01:42.102 crypto:
00:01:42.102
00:01:42.102 compress:
00:01:42.102
00:01:42.102 vdpa:
00:01:42.102
00:01:42.102
00:01:42.102 Message:
00:01:42.102 =================
00:01:42.102 Content Skipped
00:01:42.102 =================
00:01:42.102
00:01:42.102 apps:
00:01:42.102 dumpcap: explicitly disabled via build config
00:01:42.102 graph: explicitly disabled via build config
00:01:42.102 pdump: explicitly disabled via build config
00:01:42.102 proc-info: explicitly disabled via build config
00:01:42.102 test-acl: explicitly disabled via build config
00:01:42.102 test-bbdev: explicitly disabled via build config
00:01:42.102 test-cmdline: explicitly disabled via build config
00:01:42.102 test-compress-perf: explicitly disabled via build config
00:01:42.102 test-crypto-perf: explicitly disabled via build config
00:01:42.102 test-dma-perf: explicitly disabled via build config
00:01:42.102 test-eventdev: explicitly disabled via build config
00:01:42.102 test-fib: explicitly disabled via build config
00:01:42.102 test-flow-perf: explicitly disabled via build config
00:01:42.102 test-gpudev: explicitly disabled via build config
00:01:42.102 test-mldev: explicitly disabled via build config
00:01:42.102 test-pipeline: explicitly disabled via build config
00:01:42.102 test-pmd: explicitly disabled via build config
00:01:42.102 test-regex: explicitly disabled via build config
00:01:42.102 test-sad: explicitly disabled via build config
00:01:42.102 test-security-perf: explicitly disabled via build config
00:01:42.102
00:01:42.102 libs:
00:01:42.102 argparse: explicitly disabled via build config
00:01:42.102 metrics: explicitly disabled via build config
00:01:42.102 acl: explicitly disabled via build config
00:01:42.102 bbdev: explicitly disabled via build config
00:01:42.102 bitratestats: explicitly disabled via build config
00:01:42.102 bpf: explicitly disabled via build config
00:01:42.102 cfgfile: explicitly disabled via build config
00:01:42.102 distributor: explicitly disabled via build config
00:01:42.102 efd: explicitly disabled via build config
00:01:42.102 eventdev: explicitly disabled via build config
00:01:42.102 dispatcher: explicitly disabled via build config
00:01:42.102 gpudev: explicitly disabled via build config
00:01:42.102 gro: explicitly disabled via build config
00:01:42.102 gso: explicitly disabled via build config
00:01:42.102 ip_frag: explicitly disabled via build config
00:01:42.102 jobstats: explicitly disabled via build config
00:01:42.102 latencystats: explicitly disabled via build config
00:01:42.102 lpm: explicitly disabled via build config
00:01:42.102 member: explicitly disabled via build config
00:01:42.102 pcapng: explicitly disabled via build config
00:01:42.102 rawdev: explicitly disabled via build config
00:01:42.102 regexdev: explicitly disabled via build config
00:01:42.102 mldev: explicitly disabled via build config
00:01:42.102 rib: explicitly disabled via build config
00:01:42.102 sched: explicitly disabled via build config
00:01:42.102 stack: explicitly disabled via build config
00:01:42.102 ipsec: explicitly disabled via build config
00:01:42.102 pdcp: explicitly disabled via build config
00:01:42.102 fib: explicitly disabled via build config
00:01:42.102 port: explicitly disabled via build config
00:01:42.102 pdump: explicitly disabled via build config
00:01:42.102 table: explicitly disabled via build config
00:01:42.102 pipeline: explicitly disabled via build config
00:01:42.102 graph: explicitly disabled via build config
00:01:42.102 node: explicitly disabled via build config
00:01:42.102
00:01:42.102 drivers:
00:01:42.102 common/cpt: not in enabled drivers build config
00:01:42.102 common/dpaax: not in enabled drivers build config
00:01:42.102 common/iavf: not in enabled drivers build config
00:01:42.102 common/idpf: not in enabled drivers build config
00:01:42.102 common/ionic: not in enabled drivers build config
00:01:42.102 common/mvep: not in enabled drivers build config
00:01:42.102 common/octeontx: not in enabled drivers build config
00:01:42.102 bus/auxiliary: not in enabled drivers build config
00:01:42.102 bus/cdx: not in enabled drivers build config
00:01:42.102 bus/dpaa: not in enabled drivers build config
00:01:42.102 bus/fslmc: not in enabled drivers build config
00:01:42.102 bus/ifpga: not in enabled drivers build config
00:01:42.102 bus/platform: not in enabled drivers build config
00:01:42.102 bus/uacce: not in enabled drivers build config
00:01:42.102 bus/vmbus: not in enabled drivers build config
00:01:42.102 common/cnxk: not in enabled drivers build config
00:01:42.102 common/mlx5: not in enabled drivers build config
00:01:42.102 common/nfp: not in enabled drivers build config
00:01:42.102 common/nitrox: not in enabled drivers build config
00:01:42.102 common/qat: not in enabled drivers build config
00:01:42.102 common/sfc_efx: not in enabled drivers build config
00:01:42.102 mempool/bucket: not in enabled drivers build config
00:01:42.102 mempool/cnxk: not in enabled drivers build config
00:01:42.102 mempool/dpaa: not in enabled drivers build config
00:01:42.102 mempool/dpaa2: not in enabled drivers build config
00:01:42.102 mempool/octeontx: not in enabled drivers build config
00:01:42.102 mempool/stack: not in enabled drivers build config
00:01:42.102 dma/cnxk: not in enabled drivers build config
00:01:42.102 dma/dpaa: not in enabled drivers build config
00:01:42.102 dma/dpaa2: not in enabled drivers build config
00:01:42.102 dma/hisilicon: not in enabled drivers build config
00:01:42.102 dma/idxd: not in enabled drivers build config
00:01:42.102 dma/ioat: not in enabled drivers build config
00:01:42.102 dma/skeleton: not in enabled drivers build config
00:01:42.102 net/af_packet: not in enabled drivers build config
00:01:42.102 net/af_xdp: not in enabled drivers build config
00:01:42.102 net/ark: not in enabled drivers build config
00:01:42.102 net/atlantic: not in enabled drivers build config
00:01:42.102 net/avp: not in enabled drivers build config
00:01:42.102 net/axgbe: not in enabled drivers build config
00:01:42.102 net/bnx2x: not in enabled drivers build config
00:01:42.102 net/bnxt: not in enabled drivers build config
00:01:42.102 net/bonding: not in enabled drivers build config
00:01:42.102 net/cnxk: not in enabled drivers build config
00:01:42.102 net/cpfl: not in enabled drivers build config
00:01:42.102 net/cxgbe: not in enabled drivers build config
00:01:42.102 net/dpaa: not in enabled drivers build config
00:01:42.102 net/dpaa2: not in enabled drivers build config
00:01:42.102 net/e1000: not in enabled drivers build config
00:01:42.102 net/ena: not in enabled drivers build config
00:01:42.102 net/enetc: not in enabled drivers build config
00:01:42.102 net/enetfec: not in enabled drivers build config
00:01:42.102 net/enic: not in enabled drivers build config
00:01:42.102 net/failsafe: not in enabled drivers build config
00:01:42.102 net/fm10k: not in enabled drivers build config
00:01:42.102 net/gve: not in enabled drivers build config
00:01:42.102 net/hinic: not in enabled drivers build config
00:01:42.102 net/hns3: not in enabled drivers build config
00:01:42.102 net/i40e: not in enabled drivers build config
00:01:42.102 net/iavf: not in enabled drivers build config
00:01:42.102 net/ice: not in enabled drivers build config
00:01:42.102 net/idpf: not in enabled drivers build config
00:01:42.102 net/igc: not in enabled drivers build config
00:01:42.102 net/ionic: not in enabled drivers build config
00:01:42.102 net/ipn3ke: not in enabled drivers build config
00:01:42.102 net/ixgbe: not in enabled drivers build config
00:01:42.102 net/mana: not in enabled drivers build config
00:01:42.102 net/memif: not in enabled drivers build config
00:01:42.102 net/mlx4: not in enabled drivers build config
00:01:42.103 net/mlx5: not in enabled drivers build config
00:01:42.103 net/mvneta: not in enabled drivers build config
00:01:42.103 net/mvpp2: not in enabled drivers build config
00:01:42.103 net/netvsc: not in enabled drivers build config
00:01:42.103 net/nfb: not in enabled drivers build config
00:01:42.103 net/nfp: not in enabled drivers build config
00:01:42.103 net/ngbe: not in enabled drivers build config
00:01:42.103 net/null: not in enabled drivers build config
00:01:42.103 net/octeontx: not in enabled drivers build config
00:01:42.103 net/octeon_ep: not in enabled drivers build config
00:01:42.103 net/pcap: not in enabled drivers build config
00:01:42.103 net/pfe: not in enabled drivers build config
00:01:42.103 net/qede: not in enabled drivers build config
00:01:42.103 net/ring: not in enabled drivers build config
00:01:42.103 net/sfc: not in enabled drivers build config
00:01:42.103 net/softnic: not in enabled drivers build config
00:01:42.103 net/tap: not in enabled drivers build config
00:01:42.103 net/thunderx: not in enabled drivers build config
00:01:42.103 net/txgbe: not in enabled drivers build config
00:01:42.103 net/vdev_netvsc: not in enabled drivers build config
00:01:42.103 net/vhost: not in enabled drivers build config
00:01:42.103 net/virtio: not in enabled drivers build config
00:01:42.103 net/vmxnet3: not in enabled drivers build config
00:01:42.103 raw/*: missing internal dependency, "rawdev"
00:01:42.103 crypto/armv8: not in enabled drivers build config
00:01:42.103 crypto/bcmfs: not in enabled drivers build config
00:01:42.103 crypto/caam_jr: not in enabled drivers build config
00:01:42.103 crypto/ccp: not in enabled drivers build config
00:01:42.103 crypto/cnxk: not in enabled drivers build config
00:01:42.103 crypto/dpaa_sec: not in enabled drivers build config
00:01:42.103 crypto/dpaa2_sec: not in enabled drivers build config
00:01:42.103 crypto/ipsec_mb: not in enabled drivers build config
00:01:42.103 crypto/mlx5: not in enabled drivers build config
00:01:42.103 crypto/mvsam: not in enabled drivers build config
00:01:42.103 crypto/nitrox: not in enabled drivers build config
00:01:42.103 crypto/null: not in enabled drivers build config
00:01:42.103 crypto/octeontx: not in enabled drivers build config
00:01:42.103 crypto/openssl: not in enabled drivers build config
00:01:42.103 crypto/scheduler: not in enabled drivers build config
00:01:42.103 crypto/uadk: not in enabled drivers build config
00:01:42.103 crypto/virtio: not in enabled drivers build config
00:01:42.103 compress/isal: not in enabled drivers build config
00:01:42.103 compress/mlx5: not in enabled drivers build config
00:01:42.103 compress/nitrox: not in enabled drivers build config
00:01:42.103 compress/octeontx: not in enabled drivers build config
00:01:42.103 compress/zlib: not in enabled drivers build config
00:01:42.103 regex/*: missing internal dependency, "regexdev"
00:01:42.103 ml/*: missing internal dependency, "mldev"
00:01:42.103 vdpa/ifc: not in enabled drivers build config
00:01:42.103 vdpa/mlx5: not in enabled drivers build config
00:01:42.103 vdpa/nfp: not in enabled drivers build config
00:01:42.103 vdpa/sfc: not in enabled drivers build config
00:01:42.103 event/*: missing internal dependency, "eventdev"
00:01:42.103 baseband/*: missing internal dependency, "bbdev"
00:01:42.103 gpu/*: missing internal dependency, "gpudev"
00:01:42.103
00:01:42.103
00:01:42.103 Build targets in project: 85
00:01:42.103
00:01:42.103 DPDK 24.03.0
00:01:42.103
00:01:42.103 User defined options
00:01:42.103 buildtype : debug
00:01:42.103 default_library : shared
00:01:42.103 libdir : lib
00:01:42.103 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:42.103 b_sanitize : address
00:01:42.103 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:42.103 c_link_args :
00:01:42.103 cpu_instruction_set: native
00:01:42.103 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:01:42.103 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:01:42.103 enable_docs : false
00:01:42.103 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:01:42.103 enable_kmods : false
00:01:42.103 max_lcores : 128
00:01:42.103 tests : false
00:01:42.103
00:01:42.103 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:42.361 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:01:42.648 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:42.648 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:42.648 [3/268] Linking static target lib/librte_kvargs.a
00:01:42.648 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:42.648 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:42.648 [6/268] Linking static target lib/librte_log.a
00:01:43.214 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:43.214 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:43.214 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:43.214 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:43.470 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:43.470 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:43.470 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:43.470 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:43.470 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:43.728 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:43.728 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:43.728 [18/268] Linking static target lib/librte_telemetry.a
00:01:43.728 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:43.728 [20/268] Linking target lib/librte_log.so.24.1
00:01:43.986 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:43.986 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:44.244 [23/268] Linking target lib/librte_kvargs.so.24.1
00:01:44.244 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:44.244 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:44.244 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:44.244 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:44.501 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:44.501 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:44.501 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.501 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:44.501 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:44.501 [33/268] Linking target lib/librte_telemetry.so.24.1
00:01:44.501 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:44.759 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:44.759 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:45.017 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:45.275 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:45.275 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:45.275 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:45.275 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:45.275 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:45.275 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:45.533 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:45.533 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:45.790 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:45.790 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:45.790 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:45.790 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:46.049 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:46.307 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:46.307 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:46.307 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:46.565 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:46.565 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:46.565 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:46.565 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:46.824 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:46.824 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:46.824 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:46.824 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:47.082 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:47.082 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:47.341 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:47.341 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:47.341 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:47.598 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:47.598 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:47.598 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:47.856 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:47.856 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:47.856 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:47.856 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:47.856 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:47.856 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:48.114 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:48.114 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:48.114 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:48.372 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:48.372 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:48.372 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:48.630 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:48.630 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:48.630 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:48.630 [85/268] Linking static target lib/librte_eal.a
00:01:48.888 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:48.888 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:48.888 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:48.888 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:48.888 [90/268] Linking static target lib/librte_ring.a
00:01:48.888 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:48.888 [92/268] Linking static target lib/librte_mempool.a
00:01:49.454 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:49.454 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:49.454 [95/268] Linking static target lib/librte_rcu.a
00:01:49.454 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:49.712 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.712 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:49.712 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:49.712 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:49.969 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:49.969 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.227 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:50.227 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:50.227 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:50.227 [106/268] Linking static target lib/librte_mbuf.a
00:01:50.227 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:50.227 [108/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.484 [109/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:50.484 [110/268] Linking static target lib/librte_net.a
00:01:50.484 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:50.484 [112/268] Linking static target lib/librte_meter.a
00:01:50.742 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.742 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:51.000 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:51.000 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.000 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:51.259 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:51.259 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:51.259 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.516 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:52.083 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:52.083 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:52.340 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:52.340 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:52.340 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:52.340 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:52.340 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:52.340 [129/268] Linking static target lib/librte_pci.a
00:01:52.599 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:52.599 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:52.599 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:52.599 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:52.599 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:52.858 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.858 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:52.858 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:52.858 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:52.858 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:52.858 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:52.858 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:52.858 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:52.858 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:53.115 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:53.115 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:53.373 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:53.373 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:53.373 [148/268] Linking static target lib/librte_cmdline.a
00:01:53.632 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:53.632 [150/268] Linking static target lib/librte_ethdev.a
00:01:53.632 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:53.632 [152/268] Linking static target lib/librte_timer.a
00:01:53.632 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:53.890 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:53.890 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:53.890 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:54.152 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:54.411 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.411 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:54.411 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:54.670 [161/268] Compiling C object
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:54.670 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:54.670 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:54.670 [164/268] Linking static target lib/librte_compressdev.a 00:01:54.670 [165/268] Linking static target lib/librte_hash.a 00:01:54.929 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:54.929 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:54.929 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:55.187 [169/268] Linking static target lib/librte_dmadev.a 00:01:55.187 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.187 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:55.187 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:55.444 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:55.445 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.702 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:55.960 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.960 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:55.960 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.960 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:55.960 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:55.960 [181/268] Linking static target lib/librte_cryptodev.a 00:01:56.218 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:56.218 [183/268] Compiling C 
object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:56.218 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:56.476 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:56.476 [186/268] Linking static target lib/librte_power.a 00:01:56.739 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:56.739 [188/268] Linking static target lib/librte_reorder.a 00:01:56.998 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:56.998 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:56.998 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:56.998 [192/268] Linking static target lib/librte_security.a 00:01:56.998 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:57.257 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.824 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:57.824 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.824 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.083 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:58.083 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:58.342 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:58.613 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:58.613 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:58.880 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.880 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:58.880 [205/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:58.880 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:58.880 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:59.446 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:59.446 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:59.446 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:59.446 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:59.704 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:59.704 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:59.704 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.704 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.704 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.704 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.704 [218/268] Linking static target drivers/librte_bus_vdev.a 00:01:59.704 [219/268] Linking static target drivers/librte_bus_pci.a 00:01:59.704 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:59.704 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:59.963 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.963 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:59.963 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.963 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 
00:01:59.963 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:00.221 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.788 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:00.788 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.788 [230/268] Linking target lib/librte_eal.so.24.1 00:02:01.047 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:01.047 [232/268] Linking target lib/librte_meter.so.24.1 00:02:01.047 [233/268] Linking target lib/librte_timer.so.24.1 00:02:01.047 [234/268] Linking target lib/librte_pci.so.24.1 00:02:01.047 [235/268] Linking target lib/librte_ring.so.24.1 00:02:01.047 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:01.047 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:01.305 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:01.305 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:01.305 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:01.305 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:01.305 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:01.305 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:01.305 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:01.305 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:01.305 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:01.305 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:01.564 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:01.564 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:01.564 [250/268] 
Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:01.564 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:01.564 [252/268] Linking target lib/librte_net.so.24.1 00:02:01.564 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:01.564 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:01.822 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:01.822 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:01.822 [257/268] Linking target lib/librte_security.so.24.1 00:02:01.822 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:01.822 [259/268] Linking target lib/librte_hash.so.24.1 00:02:01.822 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.080 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:02.080 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:02.080 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:02.080 [264/268] Linking target lib/librte_power.so.24.1 00:02:05.361 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:05.361 [266/268] Linking static target lib/librte_vhost.a 00:02:06.296 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.554 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:06.554 INFO: autodetecting backend as ninja 00:02:06.554 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:28.476 CC lib/ut_mock/mock.o 00:02:28.476 CC lib/log/log_flags.o 00:02:28.476 CC lib/log/log.o 00:02:28.476 CC lib/log/log_deprecated.o 00:02:28.476 CC lib/ut/ut.o 00:02:28.476 LIB libspdk_ut_mock.a 00:02:28.476 LIB libspdk_ut.a 00:02:28.476 LIB libspdk_log.a 00:02:28.476 SO libspdk_ut_mock.so.6.0 
00:02:28.476 SO libspdk_ut.so.2.0 00:02:28.476 SO libspdk_log.so.7.1 00:02:28.476 SYMLINK libspdk_ut.so 00:02:28.476 SYMLINK libspdk_ut_mock.so 00:02:28.476 SYMLINK libspdk_log.so 00:02:28.476 CXX lib/trace_parser/trace.o 00:02:28.476 CC lib/dma/dma.o 00:02:28.476 CC lib/ioat/ioat.o 00:02:28.476 CC lib/util/base64.o 00:02:28.476 CC lib/util/bit_array.o 00:02:28.476 CC lib/util/cpuset.o 00:02:28.476 CC lib/util/crc16.o 00:02:28.476 CC lib/util/crc32.o 00:02:28.476 CC lib/util/crc32c.o 00:02:28.476 CC lib/vfio_user/host/vfio_user_pci.o 00:02:28.476 CC lib/util/crc32_ieee.o 00:02:28.476 CC lib/util/crc64.o 00:02:28.476 CC lib/util/dif.o 00:02:28.476 CC lib/util/fd.o 00:02:28.476 LIB libspdk_dma.a 00:02:28.476 CC lib/vfio_user/host/vfio_user.o 00:02:28.476 CC lib/util/fd_group.o 00:02:28.476 CC lib/util/file.o 00:02:28.476 SO libspdk_dma.so.5.0 00:02:28.476 LIB libspdk_ioat.a 00:02:28.476 CC lib/util/hexlify.o 00:02:28.476 SO libspdk_ioat.so.7.0 00:02:28.476 SYMLINK libspdk_dma.so 00:02:28.476 CC lib/util/iov.o 00:02:28.476 SYMLINK libspdk_ioat.so 00:02:28.476 CC lib/util/math.o 00:02:28.476 CC lib/util/net.o 00:02:28.476 CC lib/util/pipe.o 00:02:28.476 CC lib/util/strerror_tls.o 00:02:28.476 CC lib/util/string.o 00:02:28.476 LIB libspdk_vfio_user.a 00:02:28.476 CC lib/util/uuid.o 00:02:28.476 CC lib/util/xor.o 00:02:28.476 CC lib/util/zipf.o 00:02:28.476 CC lib/util/md5.o 00:02:28.476 SO libspdk_vfio_user.so.5.0 00:02:28.476 SYMLINK libspdk_vfio_user.so 00:02:28.476 LIB libspdk_util.a 00:02:28.476 LIB libspdk_trace_parser.a 00:02:28.476 SO libspdk_util.so.10.1 00:02:28.476 SO libspdk_trace_parser.so.6.0 00:02:28.476 SYMLINK libspdk_trace_parser.so 00:02:28.476 SYMLINK libspdk_util.so 00:02:28.476 CC lib/idxd/idxd_user.o 00:02:28.476 CC lib/idxd/idxd.o 00:02:28.476 CC lib/idxd/idxd_kernel.o 00:02:28.476 CC lib/json/json_parse.o 00:02:28.476 CC lib/json/json_util.o 00:02:28.476 CC lib/vmd/vmd.o 00:02:28.476 CC lib/json/json_write.o 00:02:28.476 CC lib/env_dpdk/env.o 
00:02:28.476 CC lib/rdma_utils/rdma_utils.o 00:02:28.476 CC lib/conf/conf.o 00:02:28.476 CC lib/vmd/led.o 00:02:28.476 CC lib/env_dpdk/memory.o 00:02:28.476 CC lib/env_dpdk/pci.o 00:02:28.476 CC lib/env_dpdk/init.o 00:02:28.476 LIB libspdk_conf.a 00:02:28.476 CC lib/env_dpdk/threads.o 00:02:28.476 SO libspdk_conf.so.6.0 00:02:28.476 LIB libspdk_rdma_utils.a 00:02:28.476 SO libspdk_rdma_utils.so.1.0 00:02:28.476 LIB libspdk_json.a 00:02:28.476 SYMLINK libspdk_conf.so 00:02:28.476 SO libspdk_json.so.6.0 00:02:28.476 CC lib/env_dpdk/pci_ioat.o 00:02:28.476 SYMLINK libspdk_rdma_utils.so 00:02:28.476 CC lib/env_dpdk/pci_virtio.o 00:02:28.476 CC lib/env_dpdk/pci_vmd.o 00:02:28.476 SYMLINK libspdk_json.so 00:02:28.476 CC lib/env_dpdk/pci_idxd.o 00:02:28.734 CC lib/env_dpdk/pci_event.o 00:02:28.734 CC lib/env_dpdk/sigbus_handler.o 00:02:28.734 CC lib/env_dpdk/pci_dpdk.o 00:02:28.734 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:28.734 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:28.994 LIB libspdk_idxd.a 00:02:28.994 SO libspdk_idxd.so.12.1 00:02:28.994 LIB libspdk_vmd.a 00:02:28.994 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:28.994 CC lib/rdma_provider/common.o 00:02:28.994 SYMLINK libspdk_idxd.so 00:02:28.994 CC lib/jsonrpc/jsonrpc_server.o 00:02:28.994 SO libspdk_vmd.so.6.0 00:02:28.994 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:28.994 CC lib/jsonrpc/jsonrpc_client.o 00:02:28.994 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:28.994 SYMLINK libspdk_vmd.so 00:02:29.253 LIB libspdk_rdma_provider.a 00:02:29.253 SO libspdk_rdma_provider.so.7.0 00:02:29.253 LIB libspdk_jsonrpc.a 00:02:29.253 SYMLINK libspdk_rdma_provider.so 00:02:29.512 SO libspdk_jsonrpc.so.6.0 00:02:29.512 SYMLINK libspdk_jsonrpc.so 00:02:29.769 CC lib/rpc/rpc.o 00:02:30.028 LIB libspdk_env_dpdk.a 00:02:30.028 LIB libspdk_rpc.a 00:02:30.028 SO libspdk_rpc.so.6.0 00:02:30.028 SO libspdk_env_dpdk.so.15.1 00:02:30.028 SYMLINK libspdk_rpc.so 00:02:30.286 SYMLINK libspdk_env_dpdk.so 00:02:30.286 CC lib/notify/notify.o 
00:02:30.286 CC lib/notify/notify_rpc.o 00:02:30.286 CC lib/trace/trace.o 00:02:30.286 CC lib/trace/trace_flags.o 00:02:30.286 CC lib/trace/trace_rpc.o 00:02:30.286 CC lib/keyring/keyring.o 00:02:30.286 CC lib/keyring/keyring_rpc.o 00:02:30.545 LIB libspdk_notify.a 00:02:30.545 SO libspdk_notify.so.6.0 00:02:30.545 LIB libspdk_keyring.a 00:02:30.804 SYMLINK libspdk_notify.so 00:02:30.804 SO libspdk_keyring.so.2.0 00:02:30.804 LIB libspdk_trace.a 00:02:30.804 SYMLINK libspdk_keyring.so 00:02:30.804 SO libspdk_trace.so.11.0 00:02:30.804 SYMLINK libspdk_trace.so 00:02:31.063 CC lib/thread/thread.o 00:02:31.063 CC lib/thread/iobuf.o 00:02:31.063 CC lib/sock/sock.o 00:02:31.063 CC lib/sock/sock_rpc.o 00:02:31.631 LIB libspdk_sock.a 00:02:31.890 SO libspdk_sock.so.10.0 00:02:31.890 SYMLINK libspdk_sock.so 00:02:32.149 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:32.149 CC lib/nvme/nvme_ctrlr.o 00:02:32.149 CC lib/nvme/nvme_fabric.o 00:02:32.149 CC lib/nvme/nvme_ns.o 00:02:32.149 CC lib/nvme/nvme_ns_cmd.o 00:02:32.149 CC lib/nvme/nvme_pcie_common.o 00:02:32.149 CC lib/nvme/nvme_qpair.o 00:02:32.149 CC lib/nvme/nvme_pcie.o 00:02:32.149 CC lib/nvme/nvme.o 00:02:33.085 CC lib/nvme/nvme_quirks.o 00:02:33.085 CC lib/nvme/nvme_transport.o 00:02:33.085 CC lib/nvme/nvme_discovery.o 00:02:33.085 LIB libspdk_thread.a 00:02:33.085 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:33.344 SO libspdk_thread.so.11.0 00:02:33.344 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:33.344 CC lib/nvme/nvme_tcp.o 00:02:33.344 SYMLINK libspdk_thread.so 00:02:33.344 CC lib/nvme/nvme_opal.o 00:02:33.344 CC lib/nvme/nvme_io_msg.o 00:02:33.603 CC lib/nvme/nvme_poll_group.o 00:02:33.860 CC lib/accel/accel.o 00:02:33.860 CC lib/accel/accel_rpc.o 00:02:33.860 CC lib/nvme/nvme_zns.o 00:02:33.860 CC lib/nvme/nvme_stubs.o 00:02:34.118 CC lib/nvme/nvme_auth.o 00:02:34.118 CC lib/nvme/nvme_cuse.o 00:02:34.118 CC lib/nvme/nvme_rdma.o 00:02:34.118 CC lib/accel/accel_sw.o 00:02:34.377 CC lib/blob/blobstore.o 00:02:34.377 CC 
lib/blob/request.o 00:02:34.377 CC lib/blob/zeroes.o 00:02:34.635 CC lib/blob/blob_bs_dev.o 00:02:34.635 CC lib/init/json_config.o 00:02:34.893 CC lib/init/subsystem.o 00:02:34.893 CC lib/init/subsystem_rpc.o 00:02:35.152 CC lib/init/rpc.o 00:02:35.152 LIB libspdk_accel.a 00:02:35.152 LIB libspdk_init.a 00:02:35.152 CC lib/virtio/virtio.o 00:02:35.152 CC lib/virtio/virtio_vhost_user.o 00:02:35.152 CC lib/virtio/virtio_vfio_user.o 00:02:35.152 CC lib/virtio/virtio_pci.o 00:02:35.152 SO libspdk_accel.so.16.0 00:02:35.152 CC lib/fsdev/fsdev_io.o 00:02:35.152 CC lib/fsdev/fsdev.o 00:02:35.411 SO libspdk_init.so.6.0 00:02:35.411 SYMLINK libspdk_accel.so 00:02:35.411 CC lib/fsdev/fsdev_rpc.o 00:02:35.411 SYMLINK libspdk_init.so 00:02:35.669 CC lib/bdev/bdev.o 00:02:35.669 CC lib/bdev/bdev_rpc.o 00:02:35.669 CC lib/bdev/bdev_zone.o 00:02:35.669 CC lib/bdev/part.o 00:02:35.669 CC lib/event/app.o 00:02:35.669 LIB libspdk_virtio.a 00:02:35.669 SO libspdk_virtio.so.7.0 00:02:35.669 CC lib/event/reactor.o 00:02:35.928 LIB libspdk_nvme.a 00:02:35.928 SYMLINK libspdk_virtio.so 00:02:35.928 CC lib/event/log_rpc.o 00:02:35.928 CC lib/bdev/scsi_nvme.o 00:02:35.928 SO libspdk_nvme.so.15.0 00:02:36.187 CC lib/event/app_rpc.o 00:02:36.187 CC lib/event/scheduler_static.o 00:02:36.187 LIB libspdk_fsdev.a 00:02:36.187 SO libspdk_fsdev.so.2.0 00:02:36.187 SYMLINK libspdk_fsdev.so 00:02:36.445 SYMLINK libspdk_nvme.so 00:02:36.445 LIB libspdk_event.a 00:02:36.445 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:36.445 SO libspdk_event.so.14.0 00:02:36.445 SYMLINK libspdk_event.so 00:02:37.380 LIB libspdk_fuse_dispatcher.a 00:02:37.380 SO libspdk_fuse_dispatcher.so.1.0 00:02:37.380 SYMLINK libspdk_fuse_dispatcher.so 00:02:38.756 LIB libspdk_blob.a 00:02:38.756 SO libspdk_blob.so.11.0 00:02:39.014 SYMLINK libspdk_blob.so 00:02:39.014 CC lib/blobfs/tree.o 00:02:39.273 CC lib/blobfs/blobfs.o 00:02:39.273 CC lib/lvol/lvol.o 00:02:39.273 LIB libspdk_bdev.a 00:02:39.531 SO libspdk_bdev.so.17.0 
00:02:39.531 SYMLINK libspdk_bdev.so 00:02:39.789 CC lib/ftl/ftl_core.o 00:02:39.789 CC lib/ftl/ftl_init.o 00:02:39.789 CC lib/ftl/ftl_layout.o 00:02:39.789 CC lib/ftl/ftl_debug.o 00:02:39.789 CC lib/scsi/dev.o 00:02:39.789 CC lib/ublk/ublk.o 00:02:39.789 CC lib/nvmf/ctrlr.o 00:02:39.789 CC lib/nbd/nbd.o 00:02:40.047 CC lib/nvmf/ctrlr_discovery.o 00:02:40.047 CC lib/scsi/lun.o 00:02:40.047 CC lib/nvmf/ctrlr_bdev.o 00:02:40.305 LIB libspdk_blobfs.a 00:02:40.305 SO libspdk_blobfs.so.10.0 00:02:40.305 CC lib/ftl/ftl_io.o 00:02:40.305 CC lib/nvmf/subsystem.o 00:02:40.305 SYMLINK libspdk_blobfs.so 00:02:40.305 CC lib/nbd/nbd_rpc.o 00:02:40.563 CC lib/nvmf/nvmf.o 00:02:40.563 LIB libspdk_lvol.a 00:02:40.563 SO libspdk_lvol.so.10.0 00:02:40.563 CC lib/scsi/port.o 00:02:40.563 LIB libspdk_nbd.a 00:02:40.563 SYMLINK libspdk_lvol.so 00:02:40.563 CC lib/scsi/scsi.o 00:02:40.563 CC lib/ftl/ftl_sb.o 00:02:40.563 SO libspdk_nbd.so.7.0 00:02:40.563 CC lib/ublk/ublk_rpc.o 00:02:40.821 SYMLINK libspdk_nbd.so 00:02:40.821 CC lib/scsi/scsi_bdev.o 00:02:40.821 CC lib/scsi/scsi_pr.o 00:02:40.821 CC lib/scsi/scsi_rpc.o 00:02:40.821 CC lib/ftl/ftl_l2p.o 00:02:40.821 CC lib/ftl/ftl_l2p_flat.o 00:02:40.821 LIB libspdk_ublk.a 00:02:40.821 SO libspdk_ublk.so.3.0 00:02:40.821 CC lib/nvmf/nvmf_rpc.o 00:02:41.079 SYMLINK libspdk_ublk.so 00:02:41.079 CC lib/ftl/ftl_nv_cache.o 00:02:41.079 CC lib/nvmf/transport.o 00:02:41.079 CC lib/ftl/ftl_band.o 00:02:41.079 CC lib/scsi/task.o 00:02:41.079 CC lib/ftl/ftl_band_ops.o 00:02:41.338 CC lib/ftl/ftl_writer.o 00:02:41.338 LIB libspdk_scsi.a 00:02:41.596 SO libspdk_scsi.so.9.0 00:02:41.596 CC lib/nvmf/tcp.o 00:02:41.596 SYMLINK libspdk_scsi.so 00:02:41.596 CC lib/ftl/ftl_rq.o 00:02:41.596 CC lib/ftl/ftl_reloc.o 00:02:41.596 CC lib/nvmf/stubs.o 00:02:41.596 CC lib/ftl/ftl_l2p_cache.o 00:02:41.854 CC lib/ftl/ftl_p2l.o 00:02:41.854 CC lib/ftl/ftl_p2l_log.o 00:02:42.111 CC lib/nvmf/mdns_server.o 00:02:42.111 CC lib/iscsi/conn.o 00:02:42.111 CC 
lib/vhost/vhost.o 00:02:42.111 CC lib/nvmf/rdma.o 00:02:42.369 CC lib/nvmf/auth.o 00:02:42.369 CC lib/iscsi/init_grp.o 00:02:42.369 CC lib/vhost/vhost_rpc.o 00:02:42.369 CC lib/ftl/mngt/ftl_mngt.o 00:02:42.369 CC lib/iscsi/iscsi.o 00:02:42.626 CC lib/iscsi/param.o 00:02:42.626 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:42.626 CC lib/vhost/vhost_scsi.o 00:02:42.883 CC lib/iscsi/portal_grp.o 00:02:42.884 CC lib/iscsi/tgt_node.o 00:02:42.884 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:43.141 CC lib/vhost/vhost_blk.o 00:02:43.141 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:43.399 CC lib/vhost/rte_vhost_user.o 00:02:43.399 CC lib/iscsi/iscsi_subsystem.o 00:02:43.399 CC lib/iscsi/iscsi_rpc.o 00:02:43.657 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:43.657 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:43.657 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:43.914 CC lib/iscsi/task.o 00:02:43.914 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:43.914 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:43.914 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:43.914 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:43.914 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:43.914 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:44.172 CC lib/ftl/utils/ftl_conf.o 00:02:44.172 CC lib/ftl/utils/ftl_md.o 00:02:44.172 CC lib/ftl/utils/ftl_mempool.o 00:02:44.172 CC lib/ftl/utils/ftl_bitmap.o 00:02:44.172 CC lib/ftl/utils/ftl_property.o 00:02:44.172 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:44.172 LIB libspdk_iscsi.a 00:02:44.430 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:44.430 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:44.430 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:44.430 SO libspdk_iscsi.so.8.0 00:02:44.430 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:44.430 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:44.430 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:44.689 SYMLINK libspdk_iscsi.so 00:02:44.689 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:44.689 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:44.689 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:44.689 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 
00:02:44.689 LIB libspdk_vhost.a 00:02:44.689 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:44.689 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:44.689 CC lib/ftl/base/ftl_base_dev.o 00:02:44.689 CC lib/ftl/base/ftl_base_bdev.o 00:02:44.689 SO libspdk_vhost.so.8.0 00:02:44.689 CC lib/ftl/ftl_trace.o 00:02:44.947 SYMLINK libspdk_vhost.so 00:02:45.206 LIB libspdk_nvmf.a 00:02:45.206 LIB libspdk_ftl.a 00:02:45.206 SO libspdk_nvmf.so.20.0 00:02:45.487 SO libspdk_ftl.so.9.0 00:02:45.487 SYMLINK libspdk_nvmf.so 00:02:45.745 SYMLINK libspdk_ftl.so 00:02:46.004 CC module/env_dpdk/env_dpdk_rpc.o 00:02:46.004 CC module/scheduler/gscheduler/gscheduler.o 00:02:46.004 CC module/keyring/linux/keyring.o 00:02:46.004 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:46.004 CC module/accel/error/accel_error.o 00:02:46.263 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:46.263 CC module/blob/bdev/blob_bdev.o 00:02:46.263 CC module/fsdev/aio/fsdev_aio.o 00:02:46.263 CC module/sock/posix/posix.o 00:02:46.263 CC module/keyring/file/keyring.o 00:02:46.263 LIB libspdk_env_dpdk_rpc.a 00:02:46.263 SO libspdk_env_dpdk_rpc.so.6.0 00:02:46.263 SYMLINK libspdk_env_dpdk_rpc.so 00:02:46.263 CC module/keyring/file/keyring_rpc.o 00:02:46.263 CC module/keyring/linux/keyring_rpc.o 00:02:46.263 LIB libspdk_scheduler_gscheduler.a 00:02:46.263 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:46.263 LIB libspdk_scheduler_dpdk_governor.a 00:02:46.263 SO libspdk_scheduler_gscheduler.so.4.0 00:02:46.263 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:46.263 LIB libspdk_scheduler_dynamic.a 00:02:46.263 CC module/accel/error/accel_error_rpc.o 00:02:46.521 SO libspdk_scheduler_dynamic.so.4.0 00:02:46.521 SYMLINK libspdk_scheduler_gscheduler.so 00:02:46.521 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:46.521 CC module/fsdev/aio/linux_aio_mgr.o 00:02:46.521 LIB libspdk_keyring_linux.a 00:02:46.521 LIB libspdk_keyring_file.a 00:02:46.521 SYMLINK libspdk_scheduler_dynamic.so 00:02:46.521 SO 
libspdk_keyring_linux.so.1.0 00:02:46.521 SO libspdk_keyring_file.so.2.0 00:02:46.521 LIB libspdk_blob_bdev.a 00:02:46.521 SO libspdk_blob_bdev.so.11.0 00:02:46.521 SYMLINK libspdk_keyring_linux.so 00:02:46.521 SYMLINK libspdk_keyring_file.so 00:02:46.521 LIB libspdk_accel_error.a 00:02:46.521 SYMLINK libspdk_blob_bdev.so 00:02:46.521 SO libspdk_accel_error.so.2.0 00:02:46.521 CC module/accel/ioat/accel_ioat.o 00:02:46.521 CC module/accel/ioat/accel_ioat_rpc.o 00:02:46.521 CC module/accel/dsa/accel_dsa.o 00:02:46.521 SYMLINK libspdk_accel_error.so 00:02:46.521 CC module/accel/dsa/accel_dsa_rpc.o 00:02:46.780 CC module/accel/iaa/accel_iaa.o 00:02:46.780 LIB libspdk_accel_ioat.a 00:02:46.780 CC module/bdev/delay/vbdev_delay.o 00:02:46.780 CC module/bdev/error/vbdev_error.o 00:02:46.780 CC module/blobfs/bdev/blobfs_bdev.o 00:02:46.780 SO libspdk_accel_ioat.so.6.0 00:02:47.038 CC module/bdev/gpt/gpt.o 00:02:47.038 CC module/accel/iaa/accel_iaa_rpc.o 00:02:47.038 SYMLINK libspdk_accel_ioat.so 00:02:47.038 CC module/bdev/lvol/vbdev_lvol.o 00:02:47.038 CC module/bdev/gpt/vbdev_gpt.o 00:02:47.038 LIB libspdk_accel_dsa.a 00:02:47.038 SO libspdk_accel_dsa.so.5.0 00:02:47.038 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:47.038 LIB libspdk_fsdev_aio.a 00:02:47.038 LIB libspdk_sock_posix.a 00:02:47.038 LIB libspdk_accel_iaa.a 00:02:47.038 SO libspdk_fsdev_aio.so.1.0 00:02:47.038 SYMLINK libspdk_accel_dsa.so 00:02:47.038 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:47.038 SO libspdk_accel_iaa.so.3.0 00:02:47.038 SO libspdk_sock_posix.so.6.0 00:02:47.297 CC module/bdev/error/vbdev_error_rpc.o 00:02:47.297 SYMLINK libspdk_accel_iaa.so 00:02:47.297 SYMLINK libspdk_fsdev_aio.so 00:02:47.297 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:47.297 SYMLINK libspdk_sock_posix.so 00:02:47.297 LIB libspdk_blobfs_bdev.a 00:02:47.297 SO libspdk_blobfs_bdev.so.6.0 00:02:47.297 LIB libspdk_bdev_gpt.a 00:02:47.297 LIB libspdk_bdev_delay.a 00:02:47.297 SYMLINK libspdk_blobfs_bdev.so 00:02:47.297 SO 
libspdk_bdev_gpt.so.6.0 00:02:47.297 LIB libspdk_bdev_error.a 00:02:47.297 SO libspdk_bdev_delay.so.6.0 00:02:47.297 CC module/bdev/malloc/bdev_malloc.o 00:02:47.297 CC module/bdev/null/bdev_null.o 00:02:47.297 SO libspdk_bdev_error.so.6.0 00:02:47.555 CC module/bdev/nvme/bdev_nvme.o 00:02:47.555 SYMLINK libspdk_bdev_gpt.so 00:02:47.555 SYMLINK libspdk_bdev_delay.so 00:02:47.555 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:47.555 SYMLINK libspdk_bdev_error.so 00:02:47.555 CC module/bdev/passthru/vbdev_passthru.o 00:02:47.555 CC module/bdev/raid/bdev_raid.o 00:02:47.555 CC module/bdev/raid/bdev_raid_rpc.o 00:02:47.555 CC module/bdev/split/vbdev_split.o 00:02:47.555 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:47.555 CC module/bdev/raid/bdev_raid_sb.o 00:02:47.555 LIB libspdk_bdev_lvol.a 00:02:47.813 SO libspdk_bdev_lvol.so.6.0 00:02:47.813 CC module/bdev/null/bdev_null_rpc.o 00:02:47.813 SYMLINK libspdk_bdev_lvol.so 00:02:47.813 CC module/bdev/raid/raid0.o 00:02:47.813 LIB libspdk_bdev_malloc.a 00:02:47.813 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:47.813 SO libspdk_bdev_malloc.so.6.0 00:02:47.813 CC module/bdev/raid/raid1.o 00:02:47.813 CC module/bdev/split/vbdev_split_rpc.o 00:02:47.813 LIB libspdk_bdev_null.a 00:02:48.073 SO libspdk_bdev_null.so.6.0 00:02:48.073 SYMLINK libspdk_bdev_malloc.so 00:02:48.073 CC module/bdev/raid/concat.o 00:02:48.073 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:48.073 SYMLINK libspdk_bdev_null.so 00:02:48.073 CC module/bdev/raid/raid5f.o 00:02:48.073 LIB libspdk_bdev_passthru.a 00:02:48.073 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:48.073 LIB libspdk_bdev_split.a 00:02:48.073 SO libspdk_bdev_passthru.so.6.0 00:02:48.073 CC module/bdev/nvme/nvme_rpc.o 00:02:48.073 SO libspdk_bdev_split.so.6.0 00:02:48.073 SYMLINK libspdk_bdev_passthru.so 00:02:48.332 SYMLINK libspdk_bdev_split.so 00:02:48.332 LIB libspdk_bdev_zone_block.a 00:02:48.332 SO libspdk_bdev_zone_block.so.6.0 00:02:48.332 CC 
module/bdev/aio/bdev_aio.o 00:02:48.332 SYMLINK libspdk_bdev_zone_block.so 00:02:48.332 CC module/bdev/nvme/bdev_mdns_client.o 00:02:48.332 CC module/bdev/ftl/bdev_ftl.o 00:02:48.332 CC module/bdev/nvme/vbdev_opal.o 00:02:48.332 CC module/bdev/iscsi/bdev_iscsi.o 00:02:48.332 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:48.590 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:48.590 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:48.590 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:48.905 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:48.905 CC module/bdev/aio/bdev_aio_rpc.o 00:02:48.905 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:48.905 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:48.905 LIB libspdk_bdev_raid.a 00:02:48.905 LIB libspdk_bdev_aio.a 00:02:48.906 LIB libspdk_bdev_ftl.a 00:02:48.906 SO libspdk_bdev_raid.so.6.0 00:02:49.177 SO libspdk_bdev_ftl.so.6.0 00:02:49.177 SO libspdk_bdev_aio.so.6.0 00:02:49.177 LIB libspdk_bdev_iscsi.a 00:02:49.177 SYMLINK libspdk_bdev_ftl.so 00:02:49.177 SO libspdk_bdev_iscsi.so.6.0 00:02:49.177 SYMLINK libspdk_bdev_raid.so 00:02:49.177 LIB libspdk_bdev_virtio.a 00:02:49.177 SYMLINK libspdk_bdev_aio.so 00:02:49.177 SO libspdk_bdev_virtio.so.6.0 00:02:49.177 SYMLINK libspdk_bdev_iscsi.so 00:02:49.177 SYMLINK libspdk_bdev_virtio.so 00:02:51.078 LIB libspdk_bdev_nvme.a 00:02:51.078 SO libspdk_bdev_nvme.so.7.1 00:02:51.078 SYMLINK libspdk_bdev_nvme.so 00:02:51.645 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:51.645 CC module/event/subsystems/fsdev/fsdev.o 00:02:51.645 CC module/event/subsystems/vmd/vmd.o 00:02:51.645 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:51.645 CC module/event/subsystems/iobuf/iobuf.o 00:02:51.645 CC module/event/subsystems/keyring/keyring.o 00:02:51.645 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:51.645 CC module/event/subsystems/scheduler/scheduler.o 00:02:51.645 CC module/event/subsystems/sock/sock.o 00:02:51.645 LIB libspdk_event_vmd.a 00:02:51.645 LIB libspdk_event_vhost_blk.a 00:02:51.645 LIB 
libspdk_event_fsdev.a 00:02:51.645 LIB libspdk_event_keyring.a 00:02:51.904 LIB libspdk_event_scheduler.a 00:02:51.904 SO libspdk_event_vmd.so.6.0 00:02:51.904 LIB libspdk_event_iobuf.a 00:02:51.904 SO libspdk_event_fsdev.so.1.0 00:02:51.904 SO libspdk_event_keyring.so.1.0 00:02:51.904 SO libspdk_event_vhost_blk.so.3.0 00:02:51.904 LIB libspdk_event_sock.a 00:02:51.904 SO libspdk_event_scheduler.so.4.0 00:02:51.904 SO libspdk_event_iobuf.so.3.0 00:02:51.904 SO libspdk_event_sock.so.5.0 00:02:51.904 SYMLINK libspdk_event_fsdev.so 00:02:51.904 SYMLINK libspdk_event_vmd.so 00:02:51.904 SYMLINK libspdk_event_vhost_blk.so 00:02:51.904 SYMLINK libspdk_event_keyring.so 00:02:51.904 SYMLINK libspdk_event_scheduler.so 00:02:51.904 SYMLINK libspdk_event_sock.so 00:02:51.904 SYMLINK libspdk_event_iobuf.so 00:02:52.163 CC module/event/subsystems/accel/accel.o 00:02:52.421 LIB libspdk_event_accel.a 00:02:52.421 SO libspdk_event_accel.so.6.0 00:02:52.421 SYMLINK libspdk_event_accel.so 00:02:52.679 CC module/event/subsystems/bdev/bdev.o 00:02:52.938 LIB libspdk_event_bdev.a 00:02:52.938 SO libspdk_event_bdev.so.6.0 00:02:53.197 SYMLINK libspdk_event_bdev.so 00:02:53.455 CC module/event/subsystems/ublk/ublk.o 00:02:53.455 CC module/event/subsystems/nbd/nbd.o 00:02:53.455 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:53.455 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:53.455 CC module/event/subsystems/scsi/scsi.o 00:02:53.455 LIB libspdk_event_nbd.a 00:02:53.455 LIB libspdk_event_ublk.a 00:02:53.455 SO libspdk_event_nbd.so.6.0 00:02:53.455 LIB libspdk_event_scsi.a 00:02:53.455 SO libspdk_event_ublk.so.3.0 00:02:53.714 SO libspdk_event_scsi.so.6.0 00:02:53.714 SYMLINK libspdk_event_nbd.so 00:02:53.714 SYMLINK libspdk_event_ublk.so 00:02:53.714 SYMLINK libspdk_event_scsi.so 00:02:53.714 LIB libspdk_event_nvmf.a 00:02:53.714 SO libspdk_event_nvmf.so.6.0 00:02:53.714 SYMLINK libspdk_event_nvmf.so 00:02:53.973 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:53.973 CC 
module/event/subsystems/iscsi/iscsi.o 00:02:53.973 LIB libspdk_event_vhost_scsi.a 00:02:54.231 LIB libspdk_event_iscsi.a 00:02:54.231 SO libspdk_event_vhost_scsi.so.3.0 00:02:54.231 SO libspdk_event_iscsi.so.6.0 00:02:54.231 SYMLINK libspdk_event_vhost_scsi.so 00:02:54.231 SYMLINK libspdk_event_iscsi.so 00:02:54.489 SO libspdk.so.6.0 00:02:54.489 SYMLINK libspdk.so 00:02:54.489 CC app/trace_record/trace_record.o 00:02:54.489 CC app/spdk_nvme_identify/identify.o 00:02:54.489 CXX app/trace/trace.o 00:02:54.489 CC app/spdk_lspci/spdk_lspci.o 00:02:54.747 CC app/spdk_nvme_perf/perf.o 00:02:54.747 CC app/nvmf_tgt/nvmf_main.o 00:02:54.747 CC app/iscsi_tgt/iscsi_tgt.o 00:02:54.747 CC app/spdk_tgt/spdk_tgt.o 00:02:54.747 CC examples/util/zipf/zipf.o 00:02:54.747 CC test/thread/poller_perf/poller_perf.o 00:02:54.747 LINK spdk_lspci 00:02:55.005 LINK nvmf_tgt 00:02:55.005 LINK spdk_trace_record 00:02:55.005 LINK iscsi_tgt 00:02:55.005 LINK zipf 00:02:55.005 LINK poller_perf 00:02:55.005 LINK spdk_tgt 00:02:55.005 CC app/spdk_nvme_discover/discovery_aer.o 00:02:55.296 LINK spdk_trace 00:02:55.296 CC app/spdk_top/spdk_top.o 00:02:55.296 CC examples/ioat/perf/perf.o 00:02:55.296 LINK spdk_nvme_discover 00:02:55.296 CC examples/vmd/lsvmd/lsvmd.o 00:02:55.296 CC app/spdk_dd/spdk_dd.o 00:02:55.296 CC test/dma/test_dma/test_dma.o 00:02:55.553 CC examples/idxd/perf/perf.o 00:02:55.553 LINK lsvmd 00:02:55.553 CC test/app/bdev_svc/bdev_svc.o 00:02:55.553 LINK ioat_perf 00:02:55.811 CC app/fio/nvme/fio_plugin.o 00:02:55.811 LINK spdk_nvme_perf 00:02:55.811 LINK spdk_nvme_identify 00:02:55.811 LINK bdev_svc 00:02:55.811 CC examples/vmd/led/led.o 00:02:55.811 LINK spdk_dd 00:02:55.811 CC examples/ioat/verify/verify.o 00:02:55.811 LINK idxd_perf 00:02:56.069 LINK test_dma 00:02:56.069 LINK led 00:02:56.069 CC app/fio/bdev/fio_plugin.o 00:02:56.069 CC app/vhost/vhost.o 00:02:56.069 LINK verify 00:02:56.069 TEST_HEADER include/spdk/accel.h 00:02:56.069 TEST_HEADER 
include/spdk/accel_module.h 00:02:56.069 TEST_HEADER include/spdk/assert.h 00:02:56.069 TEST_HEADER include/spdk/barrier.h 00:02:56.069 TEST_HEADER include/spdk/base64.h 00:02:56.069 TEST_HEADER include/spdk/bdev.h 00:02:56.069 TEST_HEADER include/spdk/bdev_module.h 00:02:56.069 TEST_HEADER include/spdk/bdev_zone.h 00:02:56.069 TEST_HEADER include/spdk/bit_array.h 00:02:56.069 TEST_HEADER include/spdk/bit_pool.h 00:02:56.069 TEST_HEADER include/spdk/blob_bdev.h 00:02:56.069 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:56.069 TEST_HEADER include/spdk/blobfs.h 00:02:56.328 TEST_HEADER include/spdk/blob.h 00:02:56.328 TEST_HEADER include/spdk/conf.h 00:02:56.328 TEST_HEADER include/spdk/config.h 00:02:56.328 TEST_HEADER include/spdk/cpuset.h 00:02:56.328 TEST_HEADER include/spdk/crc16.h 00:02:56.328 TEST_HEADER include/spdk/crc32.h 00:02:56.328 TEST_HEADER include/spdk/crc64.h 00:02:56.328 TEST_HEADER include/spdk/dif.h 00:02:56.328 TEST_HEADER include/spdk/dma.h 00:02:56.328 TEST_HEADER include/spdk/endian.h 00:02:56.328 TEST_HEADER include/spdk/env_dpdk.h 00:02:56.328 TEST_HEADER include/spdk/env.h 00:02:56.328 TEST_HEADER include/spdk/event.h 00:02:56.328 TEST_HEADER include/spdk/fd_group.h 00:02:56.328 TEST_HEADER include/spdk/fd.h 00:02:56.328 TEST_HEADER include/spdk/file.h 00:02:56.328 TEST_HEADER include/spdk/fsdev.h 00:02:56.328 TEST_HEADER include/spdk/fsdev_module.h 00:02:56.328 TEST_HEADER include/spdk/ftl.h 00:02:56.328 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:56.328 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:56.328 TEST_HEADER include/spdk/gpt_spec.h 00:02:56.328 TEST_HEADER include/spdk/hexlify.h 00:02:56.328 TEST_HEADER include/spdk/histogram_data.h 00:02:56.328 TEST_HEADER include/spdk/idxd.h 00:02:56.328 TEST_HEADER include/spdk/idxd_spec.h 00:02:56.328 TEST_HEADER include/spdk/init.h 00:02:56.328 TEST_HEADER include/spdk/ioat.h 00:02:56.328 TEST_HEADER include/spdk/ioat_spec.h 00:02:56.328 TEST_HEADER include/spdk/iscsi_spec.h 00:02:56.328 
TEST_HEADER include/spdk/json.h 00:02:56.328 TEST_HEADER include/spdk/jsonrpc.h 00:02:56.328 TEST_HEADER include/spdk/keyring.h 00:02:56.328 TEST_HEADER include/spdk/keyring_module.h 00:02:56.328 TEST_HEADER include/spdk/likely.h 00:02:56.328 TEST_HEADER include/spdk/log.h 00:02:56.328 CC test/app/histogram_perf/histogram_perf.o 00:02:56.328 TEST_HEADER include/spdk/lvol.h 00:02:56.328 TEST_HEADER include/spdk/md5.h 00:02:56.328 TEST_HEADER include/spdk/memory.h 00:02:56.328 TEST_HEADER include/spdk/mmio.h 00:02:56.328 TEST_HEADER include/spdk/nbd.h 00:02:56.328 TEST_HEADER include/spdk/net.h 00:02:56.328 TEST_HEADER include/spdk/notify.h 00:02:56.328 TEST_HEADER include/spdk/nvme.h 00:02:56.328 TEST_HEADER include/spdk/nvme_intel.h 00:02:56.328 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:56.328 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:56.328 TEST_HEADER include/spdk/nvme_spec.h 00:02:56.328 TEST_HEADER include/spdk/nvme_zns.h 00:02:56.328 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:56.328 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:56.328 TEST_HEADER include/spdk/nvmf.h 00:02:56.328 TEST_HEADER include/spdk/nvmf_spec.h 00:02:56.328 TEST_HEADER include/spdk/nvmf_transport.h 00:02:56.328 TEST_HEADER include/spdk/opal.h 00:02:56.328 TEST_HEADER include/spdk/opal_spec.h 00:02:56.328 CC test/env/mem_callbacks/mem_callbacks.o 00:02:56.328 TEST_HEADER include/spdk/pci_ids.h 00:02:56.328 TEST_HEADER include/spdk/pipe.h 00:02:56.328 TEST_HEADER include/spdk/queue.h 00:02:56.328 TEST_HEADER include/spdk/reduce.h 00:02:56.328 TEST_HEADER include/spdk/rpc.h 00:02:56.328 TEST_HEADER include/spdk/scheduler.h 00:02:56.328 TEST_HEADER include/spdk/scsi.h 00:02:56.328 TEST_HEADER include/spdk/scsi_spec.h 00:02:56.328 TEST_HEADER include/spdk/sock.h 00:02:56.328 TEST_HEADER include/spdk/stdinc.h 00:02:56.328 TEST_HEADER include/spdk/string.h 00:02:56.328 TEST_HEADER include/spdk/thread.h 00:02:56.328 TEST_HEADER include/spdk/trace.h 00:02:56.328 LINK vhost 00:02:56.328 
TEST_HEADER include/spdk/trace_parser.h 00:02:56.328 TEST_HEADER include/spdk/tree.h 00:02:56.328 TEST_HEADER include/spdk/ublk.h 00:02:56.328 TEST_HEADER include/spdk/util.h 00:02:56.328 TEST_HEADER include/spdk/uuid.h 00:02:56.328 TEST_HEADER include/spdk/version.h 00:02:56.328 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:56.328 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:56.328 TEST_HEADER include/spdk/vhost.h 00:02:56.328 TEST_HEADER include/spdk/vmd.h 00:02:56.328 TEST_HEADER include/spdk/xor.h 00:02:56.328 TEST_HEADER include/spdk/zipf.h 00:02:56.328 CXX test/cpp_headers/accel.o 00:02:56.328 LINK spdk_top 00:02:56.329 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:56.329 CC test/event/event_perf/event_perf.o 00:02:56.329 LINK spdk_nvme 00:02:56.587 LINK histogram_perf 00:02:56.587 CXX test/cpp_headers/accel_module.o 00:02:56.587 LINK event_perf 00:02:56.587 LINK interrupt_tgt 00:02:56.587 CC test/event/reactor/reactor.o 00:02:56.587 CC test/event/reactor_perf/reactor_perf.o 00:02:56.587 CC test/event/app_repeat/app_repeat.o 00:02:56.587 CXX test/cpp_headers/assert.o 00:02:56.846 CC test/event/scheduler/scheduler.o 00:02:56.846 LINK nvme_fuzz 00:02:56.846 LINK spdk_bdev 00:02:56.846 LINK reactor 00:02:56.846 LINK reactor_perf 00:02:56.846 LINK app_repeat 00:02:56.846 CXX test/cpp_headers/barrier.o 00:02:56.846 CC test/nvme/aer/aer.o 00:02:57.104 CC test/nvme/reset/reset.o 00:02:57.104 LINK mem_callbacks 00:02:57.104 CXX test/cpp_headers/base64.o 00:02:57.104 LINK scheduler 00:02:57.104 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:57.104 CC examples/thread/thread/thread_ex.o 00:02:57.104 CC test/nvme/sgl/sgl.o 00:02:57.104 CC test/nvme/e2edp/nvme_dp.o 00:02:57.104 CXX test/cpp_headers/bdev.o 00:02:57.104 CC test/nvme/overhead/overhead.o 00:02:57.104 CC test/env/vtophys/vtophys.o 00:02:57.362 LINK reset 00:02:57.362 LINK aer 00:02:57.362 CC test/rpc_client/rpc_client_test.o 00:02:57.362 LINK thread 00:02:57.362 LINK vtophys 00:02:57.362 CXX 
test/cpp_headers/bdev_module.o 00:02:57.362 LINK sgl 00:02:57.621 LINK nvme_dp 00:02:57.621 LINK overhead 00:02:57.621 LINK rpc_client_test 00:02:57.621 CXX test/cpp_headers/bdev_zone.o 00:02:57.621 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:57.621 CC test/accel/dif/dif.o 00:02:57.879 CC test/blobfs/mkfs/mkfs.o 00:02:57.879 CC examples/sock/hello_world/hello_sock.o 00:02:57.879 CC test/nvme/err_injection/err_injection.o 00:02:57.879 LINK env_dpdk_post_init 00:02:57.879 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:57.879 CXX test/cpp_headers/bit_array.o 00:02:57.879 CC examples/accel/perf/accel_perf.o 00:02:57.879 LINK mkfs 00:02:57.879 CC examples/blob/hello_world/hello_blob.o 00:02:58.138 CXX test/cpp_headers/bit_pool.o 00:02:58.138 LINK err_injection 00:02:58.138 LINK hello_sock 00:02:58.138 CC test/env/memory/memory_ut.o 00:02:58.138 CXX test/cpp_headers/blob_bdev.o 00:02:58.138 LINK hello_fsdev 00:02:58.397 LINK hello_blob 00:02:58.397 CXX test/cpp_headers/blobfs_bdev.o 00:02:58.397 CC test/nvme/startup/startup.o 00:02:58.397 CC test/env/pci/pci_ut.o 00:02:58.397 CXX test/cpp_headers/blobfs.o 00:02:58.655 CC examples/blob/cli/blobcli.o 00:02:58.655 LINK accel_perf 00:02:58.655 CC test/nvme/reserve/reserve.o 00:02:58.655 LINK startup 00:02:58.655 LINK dif 00:02:58.655 CXX test/cpp_headers/blob.o 00:02:58.655 CC test/nvme/simple_copy/simple_copy.o 00:02:58.655 CXX test/cpp_headers/conf.o 00:02:58.913 CXX test/cpp_headers/config.o 00:02:58.913 LINK reserve 00:02:58.913 CC test/nvme/connect_stress/connect_stress.o 00:02:58.913 LINK pci_ut 00:02:58.913 CC test/nvme/boot_partition/boot_partition.o 00:02:58.913 LINK simple_copy 00:02:58.913 CXX test/cpp_headers/cpuset.o 00:02:58.913 CC test/nvme/compliance/nvme_compliance.o 00:02:58.913 CXX test/cpp_headers/crc16.o 00:02:59.171 CXX test/cpp_headers/crc32.o 00:02:59.171 LINK boot_partition 00:02:59.172 LINK connect_stress 00:02:59.172 LINK blobcli 00:02:59.172 CXX test/cpp_headers/crc64.o 
00:02:59.172 CC test/nvme/fused_ordering/fused_ordering.o 00:02:59.430 LINK iscsi_fuzz 00:02:59.430 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:59.430 LINK nvme_compliance 00:02:59.430 CXX test/cpp_headers/dif.o 00:02:59.430 CC test/nvme/cuse/cuse.o 00:02:59.430 CC test/nvme/fdp/fdp.o 00:02:59.430 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:59.430 LINK fused_ordering 00:02:59.430 LINK doorbell_aers 00:02:59.688 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:59.688 CXX test/cpp_headers/dma.o 00:02:59.688 CC examples/nvme/hello_world/hello_world.o 00:02:59.688 LINK memory_ut 00:02:59.688 CC test/app/jsoncat/jsoncat.o 00:02:59.688 CXX test/cpp_headers/endian.o 00:02:59.688 CC test/app/stub/stub.o 00:02:59.688 CC examples/nvme/reconnect/reconnect.o 00:02:59.688 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:59.946 LINK jsoncat 00:02:59.946 LINK fdp 00:02:59.946 CXX test/cpp_headers/env_dpdk.o 00:02:59.946 LINK hello_world 00:02:59.946 LINK stub 00:02:59.946 CC examples/nvme/arbitration/arbitration.o 00:02:59.946 CXX test/cpp_headers/env.o 00:03:00.205 CC examples/nvme/hotplug/hotplug.o 00:03:00.205 LINK vhost_fuzz 00:03:00.205 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:00.205 CC examples/nvme/abort/abort.o 00:03:00.205 CXX test/cpp_headers/event.o 00:03:00.205 LINK reconnect 00:03:00.205 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:00.205 LINK cmb_copy 00:03:00.463 LINK hotplug 00:03:00.463 LINK arbitration 00:03:00.463 CXX test/cpp_headers/fd_group.o 00:03:00.463 CXX test/cpp_headers/fd.o 00:03:00.463 LINK nvme_manage 00:03:00.463 LINK pmr_persistence 00:03:00.463 CXX test/cpp_headers/file.o 00:03:00.463 CC test/lvol/esnap/esnap.o 00:03:00.721 CXX test/cpp_headers/fsdev.o 00:03:00.721 CXX test/cpp_headers/fsdev_module.o 00:03:00.721 LINK abort 00:03:00.721 CXX test/cpp_headers/ftl.o 00:03:00.721 CXX test/cpp_headers/fuse_dispatcher.o 00:03:00.721 CXX test/cpp_headers/gpt_spec.o 00:03:00.721 CXX test/cpp_headers/hexlify.o 00:03:00.721 CC 
test/bdev/bdevio/bdevio.o 00:03:00.721 CXX test/cpp_headers/histogram_data.o 00:03:00.721 CXX test/cpp_headers/idxd.o 00:03:00.979 CC examples/bdev/hello_world/hello_bdev.o 00:03:00.979 CXX test/cpp_headers/idxd_spec.o 00:03:00.979 CXX test/cpp_headers/init.o 00:03:00.979 CC examples/bdev/bdevperf/bdevperf.o 00:03:00.979 CXX test/cpp_headers/ioat.o 00:03:00.979 CXX test/cpp_headers/ioat_spec.o 00:03:00.979 CXX test/cpp_headers/iscsi_spec.o 00:03:00.979 LINK cuse 00:03:00.979 CXX test/cpp_headers/json.o 00:03:00.979 CXX test/cpp_headers/jsonrpc.o 00:03:01.239 LINK hello_bdev 00:03:01.239 CXX test/cpp_headers/keyring.o 00:03:01.239 CXX test/cpp_headers/keyring_module.o 00:03:01.239 CXX test/cpp_headers/likely.o 00:03:01.239 CXX test/cpp_headers/log.o 00:03:01.239 CXX test/cpp_headers/lvol.o 00:03:01.239 LINK bdevio 00:03:01.239 CXX test/cpp_headers/md5.o 00:03:01.239 CXX test/cpp_headers/memory.o 00:03:01.497 CXX test/cpp_headers/mmio.o 00:03:01.497 CXX test/cpp_headers/nbd.o 00:03:01.497 CXX test/cpp_headers/net.o 00:03:01.497 CXX test/cpp_headers/notify.o 00:03:01.497 CXX test/cpp_headers/nvme.o 00:03:01.497 CXX test/cpp_headers/nvme_intel.o 00:03:01.497 CXX test/cpp_headers/nvme_ocssd.o 00:03:01.497 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:01.497 CXX test/cpp_headers/nvme_spec.o 00:03:01.497 CXX test/cpp_headers/nvme_zns.o 00:03:01.755 CXX test/cpp_headers/nvmf_cmd.o 00:03:01.755 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:01.755 CXX test/cpp_headers/nvmf.o 00:03:01.755 CXX test/cpp_headers/nvmf_spec.o 00:03:01.755 CXX test/cpp_headers/nvmf_transport.o 00:03:01.755 CXX test/cpp_headers/opal.o 00:03:01.755 CXX test/cpp_headers/opal_spec.o 00:03:01.755 CXX test/cpp_headers/pci_ids.o 00:03:01.755 CXX test/cpp_headers/pipe.o 00:03:02.014 CXX test/cpp_headers/queue.o 00:03:02.014 CXX test/cpp_headers/reduce.o 00:03:02.014 CXX test/cpp_headers/rpc.o 00:03:02.014 CXX test/cpp_headers/scheduler.o 00:03:02.014 CXX test/cpp_headers/scsi.o 00:03:02.014 CXX 
test/cpp_headers/scsi_spec.o 00:03:02.014 CXX test/cpp_headers/sock.o 00:03:02.014 CXX test/cpp_headers/stdinc.o 00:03:02.014 LINK bdevperf 00:03:02.014 CXX test/cpp_headers/string.o 00:03:02.014 CXX test/cpp_headers/thread.o 00:03:02.014 CXX test/cpp_headers/trace.o 00:03:02.014 CXX test/cpp_headers/trace_parser.o 00:03:02.273 CXX test/cpp_headers/tree.o 00:03:02.273 CXX test/cpp_headers/ublk.o 00:03:02.273 CXX test/cpp_headers/util.o 00:03:02.273 CXX test/cpp_headers/uuid.o 00:03:02.273 CXX test/cpp_headers/version.o 00:03:02.273 CXX test/cpp_headers/vfio_user_pci.o 00:03:02.273 CXX test/cpp_headers/vfio_user_spec.o 00:03:02.273 CXX test/cpp_headers/vhost.o 00:03:02.273 CXX test/cpp_headers/vmd.o 00:03:02.273 CXX test/cpp_headers/xor.o 00:03:02.273 CXX test/cpp_headers/zipf.o 00:03:02.531 CC examples/nvmf/nvmf/nvmf.o 00:03:02.789 LINK nvmf 00:03:08.059 LINK esnap 00:03:08.682 00:03:08.682 real 1m39.148s 00:03:08.682 user 9m3.091s 00:03:08.682 sys 1m42.815s 00:03:08.682 16:55:32 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:08.682 ************************************ 00:03:08.682 END TEST make 00:03:08.682 ************************************ 00:03:08.682 16:55:32 make -- common/autotest_common.sh@10 -- $ set +x 00:03:08.682 16:55:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:08.682 16:55:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:08.682 16:55:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:08.682 16:55:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.682 16:55:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:08.682 16:55:32 -- pm/common@44 -- $ pid=5246 00:03:08.682 16:55:32 -- pm/common@50 -- $ kill -TERM 5246 00:03:08.682 16:55:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.682 16:55:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 
00:03:08.682 16:55:32 -- pm/common@44 -- $ pid=5247 00:03:08.682 16:55:32 -- pm/common@50 -- $ kill -TERM 5247 00:03:08.682 16:55:32 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:08.682 16:55:32 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:08.682 16:55:32 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:08.682 16:55:32 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:08.682 16:55:32 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:08.682 16:55:32 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:08.682 16:55:32 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:08.682 16:55:32 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:08.682 16:55:32 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:08.682 16:55:32 -- scripts/common.sh@336 -- # IFS=.-: 00:03:08.682 16:55:32 -- scripts/common.sh@336 -- # read -ra ver1 00:03:08.682 16:55:32 -- scripts/common.sh@337 -- # IFS=.-: 00:03:08.682 16:55:32 -- scripts/common.sh@337 -- # read -ra ver2 00:03:08.682 16:55:32 -- scripts/common.sh@338 -- # local 'op=<' 00:03:08.682 16:55:32 -- scripts/common.sh@340 -- # ver1_l=2 00:03:08.682 16:55:32 -- scripts/common.sh@341 -- # ver2_l=1 00:03:08.682 16:55:32 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:08.682 16:55:32 -- scripts/common.sh@344 -- # case "$op" in 00:03:08.682 16:55:32 -- scripts/common.sh@345 -- # : 1 00:03:08.682 16:55:32 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:08.682 16:55:32 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:08.682 16:55:32 -- scripts/common.sh@365 -- # decimal 1 00:03:08.682 16:55:32 -- scripts/common.sh@353 -- # local d=1 00:03:08.682 16:55:32 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:08.682 16:55:32 -- scripts/common.sh@355 -- # echo 1 00:03:08.682 16:55:32 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:08.682 16:55:32 -- scripts/common.sh@366 -- # decimal 2 00:03:08.682 16:55:32 -- scripts/common.sh@353 -- # local d=2 00:03:08.682 16:55:32 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:08.682 16:55:32 -- scripts/common.sh@355 -- # echo 2 00:03:08.682 16:55:32 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:08.682 16:55:32 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:08.682 16:55:32 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:08.682 16:55:32 -- scripts/common.sh@368 -- # return 0 00:03:08.682 16:55:32 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:08.682 16:55:32 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:08.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.682 --rc genhtml_branch_coverage=1 00:03:08.682 --rc genhtml_function_coverage=1 00:03:08.682 --rc genhtml_legend=1 00:03:08.682 --rc geninfo_all_blocks=1 00:03:08.682 --rc geninfo_unexecuted_blocks=1 00:03:08.682 00:03:08.682 ' 00:03:08.682 16:55:32 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:08.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.682 --rc genhtml_branch_coverage=1 00:03:08.682 --rc genhtml_function_coverage=1 00:03:08.682 --rc genhtml_legend=1 00:03:08.682 --rc geninfo_all_blocks=1 00:03:08.682 --rc geninfo_unexecuted_blocks=1 00:03:08.682 00:03:08.682 ' 00:03:08.682 16:55:32 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:08.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.682 --rc genhtml_branch_coverage=1 00:03:08.682 --rc 
genhtml_function_coverage=1 00:03:08.682 --rc genhtml_legend=1 00:03:08.682 --rc geninfo_all_blocks=1 00:03:08.682 --rc geninfo_unexecuted_blocks=1 00:03:08.682 00:03:08.682 ' 00:03:08.682 16:55:32 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:08.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.682 --rc genhtml_branch_coverage=1 00:03:08.682 --rc genhtml_function_coverage=1 00:03:08.682 --rc genhtml_legend=1 00:03:08.682 --rc geninfo_all_blocks=1 00:03:08.682 --rc geninfo_unexecuted_blocks=1 00:03:08.682 00:03:08.682 ' 00:03:08.682 16:55:32 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:08.682 16:55:32 -- nvmf/common.sh@7 -- # uname -s 00:03:08.682 16:55:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:08.682 16:55:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:08.682 16:55:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:08.682 16:55:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:08.682 16:55:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:08.682 16:55:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:08.682 16:55:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:08.682 16:55:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:08.682 16:55:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:08.682 16:55:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:08.682 16:55:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3649dedb-1a77-4be6-960e-e1a7d201f91a 00:03:08.682 16:55:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=3649dedb-1a77-4be6-960e-e1a7d201f91a 00:03:08.682 16:55:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:08.682 16:55:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:08.682 16:55:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:08.682 16:55:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:08.682 16:55:32 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:08.682 16:55:32 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:08.682 16:55:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:08.682 16:55:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:08.682 16:55:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:08.682 16:55:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.682 16:55:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.683 16:55:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.683 16:55:32 -- paths/export.sh@5 -- # export PATH 00:03:08.683 16:55:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.683 16:55:32 -- nvmf/common.sh@51 -- # : 0 00:03:08.683 16:55:32 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:08.683 16:55:32 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:08.683 16:55:32 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:08.683 16:55:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:08.683 16:55:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:08.683 16:55:32 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:08.683 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:08.683 16:55:32 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:08.683 16:55:32 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:08.683 16:55:32 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:08.683 16:55:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:08.683 16:55:32 -- spdk/autotest.sh@32 -- # uname -s 00:03:08.683 16:55:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:08.683 16:55:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:08.683 16:55:32 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:08.683 16:55:32 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:08.683 16:55:32 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:08.683 16:55:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:08.683 16:55:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:08.683 16:55:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:08.683 16:55:32 -- spdk/autotest.sh@48 -- # udevadm_pid=54309 00:03:08.683 16:55:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:08.683 16:55:32 -- pm/common@17 -- # local monitor 00:03:08.683 16:55:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.683 16:55:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:08.683 16:55:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.683 16:55:32 -- pm/common@21 -- # date +%s 00:03:08.683 16:55:32 -- pm/common@25 -- # sleep 1 00:03:08.683 16:55:32 -- 
pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732121732 00:03:08.683 16:55:32 -- pm/common@21 -- # date +%s 00:03:08.942 16:55:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732121732 00:03:08.942 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732121732_collect-vmstat.pm.log 00:03:08.942 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732121732_collect-cpu-load.pm.log 00:03:09.879 16:55:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:09.879 16:55:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:09.879 16:55:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:09.879 16:55:33 -- common/autotest_common.sh@10 -- # set +x 00:03:09.879 16:55:33 -- spdk/autotest.sh@59 -- # create_test_list 00:03:09.879 16:55:33 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:09.879 16:55:33 -- common/autotest_common.sh@10 -- # set +x 00:03:09.879 16:55:33 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:09.879 16:55:33 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:09.879 16:55:33 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:09.879 16:55:33 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:09.879 16:55:33 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:09.879 16:55:33 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:09.879 16:55:33 -- common/autotest_common.sh@1457 -- # uname 00:03:09.879 16:55:33 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:09.879 16:55:33 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:09.879 16:55:33 -- common/autotest_common.sh@1477 -- 
# uname 00:03:09.879 16:55:33 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:09.879 16:55:33 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:09.879 16:55:33 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:09.879 lcov: LCOV version 1.15 00:03:09.879 16:55:33 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:27.967 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:27.967 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:42.913 16:56:06 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:42.913 16:56:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:42.913 16:56:06 -- common/autotest_common.sh@10 -- # set +x 00:03:42.913 16:56:06 -- spdk/autotest.sh@78 -- # rm -f 00:03:42.913 16:56:06 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:43.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:43.481 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:43.481 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:43.481 16:56:07 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:43.481 16:56:07 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:43.481 16:56:07 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:43.481 16:56:07 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:43.481 
16:56:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:43.481 16:56:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:43.481 16:56:07 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:43.481 16:56:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:43.481 16:56:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:43.481 16:56:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:43.481 16:56:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:43.481 16:56:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:43.481 16:56:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:43.481 16:56:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:43.481 16:56:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:43.481 16:56:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:43.481 16:56:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:43.481 16:56:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:43.481 16:56:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:43.481 16:56:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:43.481 16:56:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:03:43.481 16:56:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:43.481 16:56:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:43.481 16:56:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:43.481 16:56:07 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:43.482 16:56:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:43.482 16:56:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:43.482 16:56:07 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:03:43.482 16:56:07 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:43.482 16:56:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:43.482 No valid GPT data, bailing 00:03:43.482 16:56:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:43.482 16:56:07 -- scripts/common.sh@394 -- # pt= 00:03:43.482 16:56:07 -- scripts/common.sh@395 -- # return 1 00:03:43.482 16:56:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:43.482 1+0 records in 00:03:43.482 1+0 records out 00:03:43.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00484245 s, 217 MB/s 00:03:43.482 16:56:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:43.482 16:56:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:43.482 16:56:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:43.482 16:56:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:43.482 16:56:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:43.482 No valid GPT data, bailing 00:03:43.740 16:56:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:43.740 16:56:07 -- scripts/common.sh@394 -- # pt= 00:03:43.740 16:56:07 -- scripts/common.sh@395 -- # return 1 00:03:43.740 16:56:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:43.740 1+0 records in 00:03:43.740 1+0 records out 00:03:43.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495293 s, 212 MB/s 00:03:43.740 16:56:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:43.740 16:56:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:43.740 16:56:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:43.740 16:56:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:43.740 16:56:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:03:43.740 No valid GPT data, bailing 00:03:43.740 16:56:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:43.740 16:56:07 -- scripts/common.sh@394 -- # pt= 00:03:43.740 16:56:07 -- scripts/common.sh@395 -- # return 1 00:03:43.740 16:56:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:43.740 1+0 records in 00:03:43.740 1+0 records out 00:03:43.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00340689 s, 308 MB/s 00:03:43.740 16:56:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:43.740 16:56:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:43.740 16:56:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:43.740 16:56:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:43.740 16:56:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:43.740 No valid GPT data, bailing 00:03:43.740 16:56:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:43.740 16:56:07 -- scripts/common.sh@394 -- # pt= 00:03:43.741 16:56:07 -- scripts/common.sh@395 -- # return 1 00:03:43.741 16:56:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:43.741 1+0 records in 00:03:43.741 1+0 records out 00:03:43.741 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476665 s, 220 MB/s 00:03:43.741 16:56:07 -- spdk/autotest.sh@105 -- # sync 00:03:44.000 16:56:07 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:44.000 16:56:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:44.000 16:56:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:45.904 16:56:09 -- spdk/autotest.sh@111 -- # uname -s 00:03:45.904 16:56:09 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:45.904 16:56:09 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:45.904 16:56:09 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:03:46.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.472 Hugepages 00:03:46.472 node hugesize free / total 00:03:46.472 node0 1048576kB 0 / 0 00:03:46.472 node0 2048kB 0 / 0 00:03:46.472 00:03:46.472 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:46.731 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:46.731 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:46.731 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:46.731 16:56:10 -- spdk/autotest.sh@117 -- # uname -s 00:03:46.731 16:56:10 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:46.731 16:56:10 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:46.731 16:56:10 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.299 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.558 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:47.558 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:47.558 16:56:11 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:48.498 16:56:12 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:48.498 16:56:12 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:48.498 16:56:12 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:48.498 16:56:12 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:48.498 16:56:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:48.498 16:56:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:48.498 16:56:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:48.498 16:56:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:48.498 16:56:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:48.757 16:56:12 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:48.757 16:56:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:48.757 16:56:12 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:49.015 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:49.015 Waiting for block devices as requested 00:03:49.015 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:49.015 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:49.273 16:56:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:49.273 16:56:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:49.273 16:56:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:49.273 16:56:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:49.273 16:56:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:49.273 16:56:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:49.273 16:56:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:49.273 16:56:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:49.273 16:56:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:49.273 16:56:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:49.273 16:56:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:49.273 16:56:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:49.273 16:56:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:49.273 16:56:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:49.273 16:56:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:49.273 16:56:12 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:03:49.273 16:56:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:49.273 16:56:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:49.273 16:56:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:49.273 16:56:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:49.273 16:56:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:49.273 16:56:12 -- common/autotest_common.sh@1543 -- # continue 00:03:49.273 16:56:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:49.273 16:56:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:49.273 16:56:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:49.273 16:56:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:49.273 16:56:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:49.273 16:56:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:49.273 16:56:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:49.273 16:56:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:49.273 16:56:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:49.273 16:56:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:49.273 16:56:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:49.273 16:56:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:49.273 16:56:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:49.273 16:56:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:49.273 16:56:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:49.273 16:56:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:49.273 16:56:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:03:49.273 16:56:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:49.273 16:56:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:49.273 16:56:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:49.273 16:56:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:49.273 16:56:13 -- common/autotest_common.sh@1543 -- # continue 00:03:49.273 16:56:13 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:49.273 16:56:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:49.273 16:56:13 -- common/autotest_common.sh@10 -- # set +x 00:03:49.273 16:56:13 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:49.273 16:56:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.273 16:56:13 -- common/autotest_common.sh@10 -- # set +x 00:03:49.273 16:56:13 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:49.861 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:50.120 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:50.120 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:50.120 16:56:13 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:50.120 16:56:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:50.120 16:56:13 -- common/autotest_common.sh@10 -- # set +x 00:03:50.120 16:56:13 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:50.120 16:56:13 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:50.120 16:56:13 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:50.120 16:56:13 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:50.120 16:56:13 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:50.120 16:56:13 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:50.120 16:56:13 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:50.120 16:56:13 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:50.120 
16:56:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:50.120 16:56:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:50.120 16:56:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:50.120 16:56:13 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:50.120 16:56:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:50.379 16:56:13 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:50.379 16:56:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:50.380 16:56:13 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:50.380 16:56:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:50.380 16:56:13 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:50.380 16:56:13 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:50.380 16:56:13 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:50.380 16:56:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:50.380 16:56:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:50.380 16:56:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:50.380 16:56:14 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:50.380 16:56:14 -- common/autotest_common.sh@1572 -- # return 0 00:03:50.380 16:56:14 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:50.380 16:56:14 -- common/autotest_common.sh@1580 -- # return 0 00:03:50.380 16:56:14 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:50.380 16:56:14 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:50.380 16:56:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:50.380 16:56:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:50.380 16:56:14 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:50.380 16:56:14 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.380 16:56:14 -- common/autotest_common.sh@10 -- # set +x 00:03:50.380 16:56:14 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:50.380 16:56:14 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:50.380 16:56:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.380 16:56:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.380 16:56:14 -- common/autotest_common.sh@10 -- # set +x 00:03:50.380 ************************************ 00:03:50.380 START TEST env 00:03:50.380 ************************************ 00:03:50.380 16:56:14 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:50.380 * Looking for test storage... 00:03:50.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:50.380 16:56:14 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:50.380 16:56:14 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:50.380 16:56:14 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:50.380 16:56:14 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:50.380 16:56:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.380 16:56:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.380 16:56:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.380 16:56:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.380 16:56:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.380 16:56:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.380 16:56:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.380 16:56:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.380 16:56:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.380 16:56:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.380 16:56:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.380 16:56:14 env -- 
scripts/common.sh@344 -- # case "$op" in 00:03:50.380 16:56:14 env -- scripts/common.sh@345 -- # : 1 00:03:50.380 16:56:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.380 16:56:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:50.380 16:56:14 env -- scripts/common.sh@365 -- # decimal 1 00:03:50.380 16:56:14 env -- scripts/common.sh@353 -- # local d=1 00:03:50.380 16:56:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.380 16:56:14 env -- scripts/common.sh@355 -- # echo 1 00:03:50.380 16:56:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.380 16:56:14 env -- scripts/common.sh@366 -- # decimal 2 00:03:50.380 16:56:14 env -- scripts/common.sh@353 -- # local d=2 00:03:50.380 16:56:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.380 16:56:14 env -- scripts/common.sh@355 -- # echo 2 00:03:50.380 16:56:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.380 16:56:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.380 16:56:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.380 16:56:14 env -- scripts/common.sh@368 -- # return 0 00:03:50.380 16:56:14 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.380 16:56:14 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:50.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.380 --rc genhtml_branch_coverage=1 00:03:50.380 --rc genhtml_function_coverage=1 00:03:50.380 --rc genhtml_legend=1 00:03:50.380 --rc geninfo_all_blocks=1 00:03:50.380 --rc geninfo_unexecuted_blocks=1 00:03:50.380 00:03:50.380 ' 00:03:50.380 16:56:14 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:50.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.380 --rc genhtml_branch_coverage=1 00:03:50.380 --rc genhtml_function_coverage=1 00:03:50.380 --rc genhtml_legend=1 00:03:50.380 --rc 
geninfo_all_blocks=1 00:03:50.380 --rc geninfo_unexecuted_blocks=1 00:03:50.380 00:03:50.380 ' 00:03:50.380 16:56:14 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:50.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.380 --rc genhtml_branch_coverage=1 00:03:50.380 --rc genhtml_function_coverage=1 00:03:50.380 --rc genhtml_legend=1 00:03:50.380 --rc geninfo_all_blocks=1 00:03:50.380 --rc geninfo_unexecuted_blocks=1 00:03:50.380 00:03:50.380 ' 00:03:50.380 16:56:14 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:50.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.380 --rc genhtml_branch_coverage=1 00:03:50.380 --rc genhtml_function_coverage=1 00:03:50.380 --rc genhtml_legend=1 00:03:50.380 --rc geninfo_all_blocks=1 00:03:50.380 --rc geninfo_unexecuted_blocks=1 00:03:50.380 00:03:50.380 ' 00:03:50.380 16:56:14 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:50.380 16:56:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.380 16:56:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.380 16:56:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.380 ************************************ 00:03:50.380 START TEST env_memory 00:03:50.380 ************************************ 00:03:50.380 16:56:14 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:50.380 00:03:50.380 00:03:50.380 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.380 http://cunit.sourceforge.net/ 00:03:50.380 00:03:50.380 00:03:50.380 Suite: memory 00:03:50.640 Test: alloc and free memory map ...[2024-11-20 16:56:14.296786] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:50.640 passed 00:03:50.640 Test: mem map translation ...[2024-11-20 16:56:14.357978] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:50.640 [2024-11-20 16:56:14.358088] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:50.640 [2024-11-20 16:56:14.358185] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:50.640 [2024-11-20 16:56:14.358214] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:50.640 passed 00:03:50.640 Test: mem map registration ...[2024-11-20 16:56:14.457300] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:50.640 [2024-11-20 16:56:14.457419] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:50.640 passed 00:03:50.899 Test: mem map adjacent registrations ...passed 00:03:50.899 00:03:50.899 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.899 suites 1 1 n/a 0 0 00:03:50.899 tests 4 4 4 0 0 00:03:50.899 asserts 152 152 152 0 n/a 00:03:50.899 00:03:50.899 Elapsed time = 0.345 seconds 00:03:50.899 00:03:50.899 real 0m0.386s 00:03:50.899 user 0m0.346s 00:03:50.899 sys 0m0.029s 00:03:50.899 16:56:14 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.899 16:56:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:50.899 ************************************ 00:03:50.899 END TEST env_memory 00:03:50.899 ************************************ 00:03:50.899 16:56:14 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:50.899 
16:56:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.899 16:56:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.899 16:56:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.899 ************************************ 00:03:50.899 START TEST env_vtophys 00:03:50.899 ************************************ 00:03:50.899 16:56:14 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:50.899 EAL: lib.eal log level changed from notice to debug 00:03:50.899 EAL: Detected lcore 0 as core 0 on socket 0 00:03:50.899 EAL: Detected lcore 1 as core 0 on socket 0 00:03:50.899 EAL: Detected lcore 2 as core 0 on socket 0 00:03:50.899 EAL: Detected lcore 3 as core 0 on socket 0 00:03:50.899 EAL: Detected lcore 4 as core 0 on socket 0 00:03:50.899 EAL: Detected lcore 5 as core 0 on socket 0 00:03:50.899 EAL: Detected lcore 6 as core 0 on socket 0 00:03:50.899 EAL: Detected lcore 7 as core 0 on socket 0 00:03:50.899 EAL: Detected lcore 8 as core 0 on socket 0 00:03:50.899 EAL: Detected lcore 9 as core 0 on socket 0 00:03:50.899 EAL: Maximum logical cores by configuration: 128 00:03:50.899 EAL: Detected CPU lcores: 10 00:03:50.899 EAL: Detected NUMA nodes: 1 00:03:50.899 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:50.899 EAL: Detected shared linkage of DPDK 00:03:50.899 EAL: No shared files mode enabled, IPC will be disabled 00:03:50.899 EAL: Selected IOVA mode 'PA' 00:03:50.899 EAL: Probing VFIO support... 00:03:50.899 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:50.899 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:50.899 EAL: Ask a virtual area of 0x2e000 bytes 00:03:50.899 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:50.899 EAL: Setting up physically contiguous memory... 
00:03:50.899 EAL: Setting maximum number of open files to 524288 00:03:50.899 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:50.899 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:50.899 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.899 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:50.899 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.899 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.899 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:50.899 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:50.899 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.899 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:50.899 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.899 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.899 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:50.899 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:50.899 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.899 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:50.899 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.899 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.899 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:50.899 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:50.899 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.899 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:50.899 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.899 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.899 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:50.899 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:50.899 EAL: Hugepages will be freed exactly as allocated. 
00:03:50.899 EAL: No shared files mode enabled, IPC is disabled 00:03:51.158 EAL: No shared files mode enabled, IPC is disabled 00:03:51.158 EAL: TSC frequency is ~2200000 KHz 00:03:51.158 EAL: Main lcore 0 is ready (tid=7fca66479a40;cpuset=[0]) 00:03:51.158 EAL: Trying to obtain current memory policy. 00:03:51.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.158 EAL: Restoring previous memory policy: 0 00:03:51.158 EAL: request: mp_malloc_sync 00:03:51.158 EAL: No shared files mode enabled, IPC is disabled 00:03:51.158 EAL: Heap on socket 0 was expanded by 2MB 00:03:51.158 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:51.158 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:51.158 EAL: Mem event callback 'spdk:(nil)' registered 00:03:51.158 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:51.158 00:03:51.158 00:03:51.158 CUnit - A unit testing framework for C - Version 2.1-3 00:03:51.158 http://cunit.sourceforge.net/ 00:03:51.158 00:03:51.158 00:03:51.158 Suite: components_suite 00:03:51.726 Test: vtophys_malloc_test ...passed 00:03:51.726 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:51.726 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.726 EAL: Restoring previous memory policy: 4 00:03:51.726 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.726 EAL: request: mp_malloc_sync 00:03:51.726 EAL: No shared files mode enabled, IPC is disabled 00:03:51.726 EAL: Heap on socket 0 was expanded by 4MB 00:03:51.726 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.726 EAL: request: mp_malloc_sync 00:03:51.726 EAL: No shared files mode enabled, IPC is disabled 00:03:51.726 EAL: Heap on socket 0 was shrunk by 4MB 00:03:51.726 EAL: Trying to obtain current memory policy. 
00:03:51.726 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.726 EAL: Restoring previous memory policy: 4 00:03:51.726 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.726 EAL: request: mp_malloc_sync 00:03:51.726 EAL: No shared files mode enabled, IPC is disabled 00:03:51.726 EAL: Heap on socket 0 was expanded by 6MB 00:03:51.726 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.726 EAL: request: mp_malloc_sync 00:03:51.726 EAL: No shared files mode enabled, IPC is disabled 00:03:51.726 EAL: Heap on socket 0 was shrunk by 6MB 00:03:51.726 EAL: Trying to obtain current memory policy. 00:03:51.726 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.726 EAL: Restoring previous memory policy: 4 00:03:51.726 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.726 EAL: request: mp_malloc_sync 00:03:51.726 EAL: No shared files mode enabled, IPC is disabled 00:03:51.726 EAL: Heap on socket 0 was expanded by 10MB 00:03:51.726 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.726 EAL: request: mp_malloc_sync 00:03:51.726 EAL: No shared files mode enabled, IPC is disabled 00:03:51.726 EAL: Heap on socket 0 was shrunk by 10MB 00:03:51.726 EAL: Trying to obtain current memory policy. 00:03:51.726 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.726 EAL: Restoring previous memory policy: 4 00:03:51.726 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.726 EAL: request: mp_malloc_sync 00:03:51.726 EAL: No shared files mode enabled, IPC is disabled 00:03:51.726 EAL: Heap on socket 0 was expanded by 18MB 00:03:51.726 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.726 EAL: request: mp_malloc_sync 00:03:51.726 EAL: No shared files mode enabled, IPC is disabled 00:03:51.726 EAL: Heap on socket 0 was shrunk by 18MB 00:03:51.726 EAL: Trying to obtain current memory policy. 
00:03:51.726 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.726 EAL: Restoring previous memory policy: 4 00:03:51.726 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.726 EAL: request: mp_malloc_sync 00:03:51.726 EAL: No shared files mode enabled, IPC is disabled 00:03:51.726 EAL: Heap on socket 0 was expanded by 34MB 00:03:51.726 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.726 EAL: request: mp_malloc_sync 00:03:51.726 EAL: No shared files mode enabled, IPC is disabled 00:03:51.726 EAL: Heap on socket 0 was shrunk by 34MB 00:03:51.985 EAL: Trying to obtain current memory policy. 00:03:51.985 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.985 EAL: Restoring previous memory policy: 4 00:03:51.985 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.985 EAL: request: mp_malloc_sync 00:03:51.985 EAL: No shared files mode enabled, IPC is disabled 00:03:51.985 EAL: Heap on socket 0 was expanded by 66MB 00:03:51.985 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.985 EAL: request: mp_malloc_sync 00:03:51.985 EAL: No shared files mode enabled, IPC is disabled 00:03:51.985 EAL: Heap on socket 0 was shrunk by 66MB 00:03:51.985 EAL: Trying to obtain current memory policy. 00:03:51.985 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.985 EAL: Restoring previous memory policy: 4 00:03:51.985 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.985 EAL: request: mp_malloc_sync 00:03:51.985 EAL: No shared files mode enabled, IPC is disabled 00:03:51.985 EAL: Heap on socket 0 was expanded by 130MB 00:03:52.244 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.244 EAL: request: mp_malloc_sync 00:03:52.244 EAL: No shared files mode enabled, IPC is disabled 00:03:52.244 EAL: Heap on socket 0 was shrunk by 130MB 00:03:52.502 EAL: Trying to obtain current memory policy. 
00:03:52.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.502 EAL: Restoring previous memory policy: 4 00:03:52.502 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.502 EAL: request: mp_malloc_sync 00:03:52.502 EAL: No shared files mode enabled, IPC is disabled 00:03:52.502 EAL: Heap on socket 0 was expanded by 258MB 00:03:53.070 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.070 EAL: request: mp_malloc_sync 00:03:53.070 EAL: No shared files mode enabled, IPC is disabled 00:03:53.070 EAL: Heap on socket 0 was shrunk by 258MB 00:03:53.328 EAL: Trying to obtain current memory policy. 00:03:53.328 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.328 EAL: Restoring previous memory policy: 4 00:03:53.328 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.328 EAL: request: mp_malloc_sync 00:03:53.328 EAL: No shared files mode enabled, IPC is disabled 00:03:53.328 EAL: Heap on socket 0 was expanded by 514MB 00:03:54.263 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.263 EAL: request: mp_malloc_sync 00:03:54.263 EAL: No shared files mode enabled, IPC is disabled 00:03:54.263 EAL: Heap on socket 0 was shrunk by 514MB 00:03:55.200 EAL: Trying to obtain current memory policy. 
00:03:55.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.547 EAL: Restoring previous memory policy: 4 00:03:55.547 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.547 EAL: request: mp_malloc_sync 00:03:55.547 EAL: No shared files mode enabled, IPC is disabled 00:03:55.547 EAL: Heap on socket 0 was expanded by 1026MB 00:03:56.936 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.195 EAL: request: mp_malloc_sync 00:03:57.195 EAL: No shared files mode enabled, IPC is disabled 00:03:57.195 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:58.574 passed 00:03:58.574 00:03:58.574 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.574 suites 1 1 n/a 0 0 00:03:58.574 tests 2 2 2 0 0 00:03:58.574 asserts 5684 5684 5684 0 n/a 00:03:58.574 00:03:58.574 Elapsed time = 7.419 seconds 00:03:58.574 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.574 EAL: request: mp_malloc_sync 00:03:58.574 EAL: No shared files mode enabled, IPC is disabled 00:03:58.574 EAL: Heap on socket 0 was shrunk by 2MB 00:03:58.574 EAL: No shared files mode enabled, IPC is disabled 00:03:58.574 EAL: No shared files mode enabled, IPC is disabled 00:03:58.574 EAL: No shared files mode enabled, IPC is disabled 00:03:58.574 00:03:58.574 real 0m7.749s 00:03:58.574 user 0m6.562s 00:03:58.574 sys 0m1.027s 00:03:58.574 16:56:22 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.574 16:56:22 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:58.574 ************************************ 00:03:58.574 END TEST env_vtophys 00:03:58.574 ************************************ 00:03:58.833 16:56:22 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:58.834 16:56:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.834 16:56:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.834 16:56:22 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.834 
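The expand/shrink pairs above come from the env_vtophys malloc test: each power-of-two allocation (32MB through 1024MB) grows the heap by the request plus one extra 2MB hugepage, then frees it again (34MB, 66MB, 130MB, 258MB, 514MB, 1026MB). A toy Python model of that pattern — the "+2MB metadata overhead" is an inference from the logged sizes, not SPDK or DPDK code:

```python
HUGEPAGE_MB = 2  # DPDK dynamic memory grows/shrinks in hugepage-sized chunks

class ToyHeap:
    """Toy model of the dynamic heap behaviour in the log: expand on malloc,
    shrink on free, recording events like the 'spdk:(nil)' mem event callback."""
    def __init__(self):
        self.size_mb = 0
        self.events = []

    def malloc(self, alloc_mb):
        # Assumption: one extra hugepage for allocator metadata, which matches
        # the logged "+2MB" over each power-of-two request.
        grow = alloc_mb + HUGEPAGE_MB
        self.size_mb += grow
        self.events.append(("expanded", grow))
        return grow

    def free(self, grow):
        self.size_mb -= grow
        self.events.append(("shrunk", grow))

heap = ToyHeap()
for k in (5, 6, 7, 8, 9, 10):        # 32MB .. 1024MB requests
    grown = heap.malloc(2 ** k)
    heap.free(grown)

print([mb for ev, mb in heap.events if ev == "expanded"])
# -> [34, 66, 130, 258, 514, 1026], the expansion sizes seen in the log
```

Every expansion is paired with an equal shrink, which is why the heap ends the test back at its starting size.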
************************************ 00:03:58.834 START TEST env_pci 00:03:58.834 ************************************ 00:03:58.834 16:56:22 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:58.834 00:03:58.834 00:03:58.834 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.834 http://cunit.sourceforge.net/ 00:03:58.834 00:03:58.834 00:03:58.834 Suite: pci 00:03:58.834 Test: pci_hook ...[2024-11-20 16:56:22.498069] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56640 has claimed it 00:03:58.834 passed 00:03:58.834 00:03:58.834 EAL: Cannot find device (10000:00:01.0) 00:03:58.834 EAL: Failed to attach device on primary process 00:03:58.834 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.834 suites 1 1 n/a 0 0 00:03:58.834 tests 1 1 1 0 0 00:03:58.834 asserts 25 25 25 0 n/a 00:03:58.834 00:03:58.834 Elapsed time = 0.009 seconds 00:03:58.834 00:03:58.834 real 0m0.088s 00:03:58.834 user 0m0.044s 00:03:58.834 sys 0m0.042s 00:03:58.834 16:56:22 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.834 16:56:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:58.834 ************************************ 00:03:58.834 END TEST env_pci 00:03:58.834 ************************************ 00:03:58.834 16:56:22 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:58.834 16:56:22 env -- env/env.sh@15 -- # uname 00:03:58.834 16:56:22 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:58.834 16:56:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:58.834 16:56:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:58.834 16:56:22 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:58.834 16:56:22 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.834 16:56:22 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.834 ************************************ 00:03:58.834 START TEST env_dpdk_post_init 00:03:58.834 ************************************ 00:03:58.834 16:56:22 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:58.834 EAL: Detected CPU lcores: 10 00:03:58.834 EAL: Detected NUMA nodes: 1 00:03:58.834 EAL: Detected shared linkage of DPDK 00:03:59.093 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:59.093 EAL: Selected IOVA mode 'PA' 00:03:59.093 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:59.093 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:59.093 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:59.093 Starting DPDK initialization... 00:03:59.093 Starting SPDK post initialization... 00:03:59.093 SPDK NVMe probe 00:03:59.093 Attaching to 0000:00:10.0 00:03:59.093 Attaching to 0000:00:11.0 00:03:59.093 Attached to 0000:00:10.0 00:03:59.093 Attached to 0000:00:11.0 00:03:59.093 Cleaning up... 
00:03:59.093 00:03:59.093 real 0m0.297s 00:03:59.093 user 0m0.098s 00:03:59.093 sys 0m0.099s 00:03:59.093 16:56:22 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.093 16:56:22 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:59.093 ************************************ 00:03:59.093 END TEST env_dpdk_post_init 00:03:59.093 ************************************ 00:03:59.093 16:56:22 env -- env/env.sh@26 -- # uname 00:03:59.093 16:56:22 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:59.093 16:56:22 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:59.093 16:56:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.093 16:56:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.093 16:56:22 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.093 ************************************ 00:03:59.093 START TEST env_mem_callbacks 00:03:59.093 ************************************ 00:03:59.093 16:56:22 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:59.358 EAL: Detected CPU lcores: 10 00:03:59.358 EAL: Detected NUMA nodes: 1 00:03:59.358 EAL: Detected shared linkage of DPDK 00:03:59.358 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:59.358 EAL: Selected IOVA mode 'PA' 00:03:59.358 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:59.358 00:03:59.358 00:03:59.358 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.358 http://cunit.sourceforge.net/ 00:03:59.358 00:03:59.358 00:03:59.358 Suite: memory 00:03:59.358 Test: test ... 
00:03:59.358 register 0x200000200000 2097152 00:03:59.358 malloc 3145728 00:03:59.358 register 0x200000400000 4194304 00:03:59.358 buf 0x2000004fffc0 len 3145728 PASSED 00:03:59.358 malloc 64 00:03:59.358 buf 0x2000004ffec0 len 64 PASSED 00:03:59.358 malloc 4194304 00:03:59.358 register 0x200000800000 6291456 00:03:59.358 buf 0x2000009fffc0 len 4194304 PASSED 00:03:59.358 free 0x2000004fffc0 3145728 00:03:59.358 free 0x2000004ffec0 64 00:03:59.358 unregister 0x200000400000 4194304 PASSED 00:03:59.358 free 0x2000009fffc0 4194304 00:03:59.358 unregister 0x200000800000 6291456 PASSED 00:03:59.358 malloc 8388608 00:03:59.358 register 0x200000400000 10485760 00:03:59.358 buf 0x2000005fffc0 len 8388608 PASSED 00:03:59.358 free 0x2000005fffc0 8388608 00:03:59.358 unregister 0x200000400000 10485760 PASSED 00:03:59.358 passed 00:03:59.358 00:03:59.358 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.358 suites 1 1 n/a 0 0 00:03:59.358 tests 1 1 1 0 0 00:03:59.358 asserts 15 15 15 0 n/a 00:03:59.358 00:03:59.358 Elapsed time = 0.062 seconds 00:03:59.632 00:03:59.632 real 0m0.270s 00:03:59.632 user 0m0.100s 00:03:59.632 sys 0m0.066s 00:03:59.632 16:56:23 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.632 ************************************ 00:03:59.632 END TEST env_mem_callbacks 00:03:59.632 ************************************ 00:03:59.632 16:56:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:59.632 ************************************ 00:03:59.632 END TEST env 00:03:59.632 ************************************ 00:03:59.632 00:03:59.632 real 0m9.238s 00:03:59.632 user 0m7.346s 00:03:59.632 sys 0m1.514s 00:03:59.632 16:56:23 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.632 16:56:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.632 16:56:23 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:59.632 16:56:23 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.632 16:56:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.632 16:56:23 -- common/autotest_common.sh@10 -- # set +x 00:03:59.632 ************************************ 00:03:59.632 START TEST rpc 00:03:59.632 ************************************ 00:03:59.632 16:56:23 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:59.632 * Looking for test storage... 00:03:59.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:59.632 16:56:23 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:59.632 16:56:23 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:59.632 16:56:23 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:59.632 16:56:23 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:59.632 16:56:23 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.632 16:56:23 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.632 16:56:23 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.632 16:56:23 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.632 16:56:23 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.632 16:56:23 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.632 16:56:23 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.632 16:56:23 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.632 16:56:23 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.632 16:56:23 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.632 16:56:23 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.632 16:56:23 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:59.632 16:56:23 rpc -- scripts/common.sh@345 -- # : 1 00:03:59.632 16:56:23 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.632 16:56:23 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:59.632 16:56:23 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:59.632 16:56:23 rpc -- scripts/common.sh@353 -- # local d=1 00:03:59.632 16:56:23 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.632 16:56:23 rpc -- scripts/common.sh@355 -- # echo 1 00:03:59.632 16:56:23 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.891 16:56:23 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:59.891 16:56:23 rpc -- scripts/common.sh@353 -- # local d=2 00:03:59.891 16:56:23 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.891 16:56:23 rpc -- scripts/common.sh@355 -- # echo 2 00:03:59.891 16:56:23 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.891 16:56:23 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.891 16:56:23 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.891 16:56:23 rpc -- scripts/common.sh@368 -- # return 0 00:03:59.891 16:56:23 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.891 16:56:23 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:59.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.891 --rc genhtml_branch_coverage=1 00:03:59.891 --rc genhtml_function_coverage=1 00:03:59.891 --rc genhtml_legend=1 00:03:59.891 --rc geninfo_all_blocks=1 00:03:59.891 --rc geninfo_unexecuted_blocks=1 00:03:59.891 00:03:59.891 ' 00:03:59.891 16:56:23 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:59.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.891 --rc genhtml_branch_coverage=1 00:03:59.891 --rc genhtml_function_coverage=1 00:03:59.891 --rc genhtml_legend=1 00:03:59.891 --rc geninfo_all_blocks=1 00:03:59.891 --rc geninfo_unexecuted_blocks=1 00:03:59.891 00:03:59.891 ' 00:03:59.891 16:56:23 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:59.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:59.891 --rc genhtml_branch_coverage=1 00:03:59.891 --rc genhtml_function_coverage=1 00:03:59.891 --rc genhtml_legend=1 00:03:59.891 --rc geninfo_all_blocks=1 00:03:59.891 --rc geninfo_unexecuted_blocks=1 00:03:59.891 00:03:59.891 ' 00:03:59.891 16:56:23 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:59.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.891 --rc genhtml_branch_coverage=1 00:03:59.891 --rc genhtml_function_coverage=1 00:03:59.891 --rc genhtml_legend=1 00:03:59.891 --rc geninfo_all_blocks=1 00:03:59.891 --rc geninfo_unexecuted_blocks=1 00:03:59.891 00:03:59.891 ' 00:03:59.891 16:56:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56767 00:03:59.891 16:56:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:59.891 16:56:23 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:59.891 16:56:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56767 00:03:59.891 16:56:23 rpc -- common/autotest_common.sh@835 -- # '[' -z 56767 ']' 00:03:59.891 16:56:23 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.891 16:56:23 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:59.891 16:56:23 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.891 16:56:23 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:59.891 16:56:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.891 [2024-11-20 16:56:23.605847] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:03:59.891 [2024-11-20 16:56:23.606260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56767 ] 00:04:00.151 [2024-11-20 16:56:23.780584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.151 [2024-11-20 16:56:23.912433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:00.151 [2024-11-20 16:56:23.912740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56767' to capture a snapshot of events at runtime. 00:04:00.151 [2024-11-20 16:56:23.912971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:00.151 [2024-11-20 16:56:23.913110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:00.151 [2024-11-20 16:56:23.913279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56767 for offline analysis/debug. 
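The lcov version gate traced earlier (`lt 1.15 2` via `cmp_versions` in scripts/common.sh) compares dotted version strings component-wise. A Python sketch of that comparison logic, assuming missing components default to 0 as the shell helper's loop bound implies:

```python
def lt(v1, v2):
    """Sketch of the scripts/common.sh cmp_versions '<' path: split on '.',
    compare numerically component by component, pad missing parts with 0."""
    a = [int(x) for x in v1.split(".")]
    b = [int(x) for x in v2.split(".")]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    for x, y in zip(a, b):
        if x != y:
            return x < y
    return False  # equal versions are not 'less than'

print(lt("1.15", "2"))   # True  -- the branch that enables the lcov --rc options
print(lt("2.0", "2"))    # False
```

The harness uses the result to decide whether the installed lcov (here 1.15, older than 2) needs the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` flags exported above.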
00:04:00.151 [2024-11-20 16:56:23.914734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.089 16:56:24 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.089 16:56:24 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:01.089 16:56:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:01.089 16:56:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:01.089 16:56:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:01.089 16:56:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:01.089 16:56:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.089 16:56:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.089 16:56:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.089 ************************************ 00:04:01.089 START TEST rpc_integrity 00:04:01.089 ************************************ 00:04:01.089 16:56:24 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:01.089 16:56:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:01.089 16:56:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.089 16:56:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.089 16:56:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.089 16:56:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:01.089 16:56:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:01.089 16:56:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:01.089 16:56:24 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:01.089 16:56:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.089 16:56:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.089 16:56:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.089 16:56:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:01.089 16:56:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:01.089 16:56:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.089 16:56:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.089 16:56:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.089 16:56:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:01.089 { 00:04:01.089 "name": "Malloc0", 00:04:01.089 "aliases": [ 00:04:01.089 "077cd071-ee9b-4bb9-bf58-4c14d3d79b79" 00:04:01.089 ], 00:04:01.089 "product_name": "Malloc disk", 00:04:01.089 "block_size": 512, 00:04:01.089 "num_blocks": 16384, 00:04:01.089 "uuid": "077cd071-ee9b-4bb9-bf58-4c14d3d79b79", 00:04:01.089 "assigned_rate_limits": { 00:04:01.089 "rw_ios_per_sec": 0, 00:04:01.089 "rw_mbytes_per_sec": 0, 00:04:01.089 "r_mbytes_per_sec": 0, 00:04:01.089 "w_mbytes_per_sec": 0 00:04:01.089 }, 00:04:01.090 "claimed": false, 00:04:01.090 "zoned": false, 00:04:01.090 "supported_io_types": { 00:04:01.090 "read": true, 00:04:01.090 "write": true, 00:04:01.090 "unmap": true, 00:04:01.090 "flush": true, 00:04:01.090 "reset": true, 00:04:01.090 "nvme_admin": false, 00:04:01.090 "nvme_io": false, 00:04:01.090 "nvme_io_md": false, 00:04:01.090 "write_zeroes": true, 00:04:01.090 "zcopy": true, 00:04:01.090 "get_zone_info": false, 00:04:01.090 "zone_management": false, 00:04:01.090 "zone_append": false, 00:04:01.090 "compare": false, 00:04:01.090 "compare_and_write": false, 00:04:01.090 "abort": true, 00:04:01.090 "seek_hole": false, 
00:04:01.090 "seek_data": false, 00:04:01.090 "copy": true, 00:04:01.090 "nvme_iov_md": false 00:04:01.090 }, 00:04:01.090 "memory_domains": [ 00:04:01.090 { 00:04:01.090 "dma_device_id": "system", 00:04:01.090 "dma_device_type": 1 00:04:01.090 }, 00:04:01.090 { 00:04:01.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.090 "dma_device_type": 2 00:04:01.090 } 00:04:01.090 ], 00:04:01.090 "driver_specific": {} 00:04:01.090 } 00:04:01.090 ]' 00:04:01.090 16:56:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:01.349 16:56:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:01.349 16:56:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:01.349 16:56:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.349 16:56:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.349 [2024-11-20 16:56:24.969808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:01.349 [2024-11-20 16:56:24.969892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:01.349 [2024-11-20 16:56:24.969928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:01.349 [2024-11-20 16:56:24.969951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:01.349 [2024-11-20 16:56:24.972930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:01.349 [2024-11-20 16:56:24.972984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:01.349 Passthru0 00:04:01.349 16:56:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.349 16:56:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:01.349 16:56:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.349 16:56:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
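rpc_integrity validates the `rpc_cmd bdev_get_bdevs` output with `jq length` checks (1 bdev after `bdev_malloc_create`, 2 after `bdev_passthru_create`). The same kind of check in Python, on a pared-down stand-in for the logged Malloc0 descriptor (the real output carries many more fields — supported_io_types, memory_domains, and so on):

```python
import json

# Hypothetical, pared-down stand-in for the bdev_get_bdevs output in the log.
bdevs_json = """
[
  {
    "name": "Malloc0",
    "product_name": "Malloc disk",
    "block_size": 512,
    "num_blocks": 16384,
    "claimed": false
  }
]
"""

bdevs = json.loads(bdevs_json)
# Equivalent of rpc.sh's: '[' "$(jq length <<< $bdevs)" == 1 ']'
assert len(bdevs) == 1
# Before the passthru vbdev claims it, the malloc bdev must be unclaimed.
assert bdevs[0]["claimed"] is False
print(bdevs[0]["name"])   # Malloc0
```

After `bdev_passthru_create -b Malloc0 -p Passthru0` succeeds, the list grows to two entries and Malloc0's `claimed` flips to true with `"claim_type": "exclusive_write"`, which is exactly what the second JSON dump in the log shows.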
00:04:01.349 16:56:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.349 16:56:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:01.349 { 00:04:01.349 "name": "Malloc0", 00:04:01.349 "aliases": [ 00:04:01.349 "077cd071-ee9b-4bb9-bf58-4c14d3d79b79" 00:04:01.349 ], 00:04:01.349 "product_name": "Malloc disk", 00:04:01.349 "block_size": 512, 00:04:01.349 "num_blocks": 16384, 00:04:01.349 "uuid": "077cd071-ee9b-4bb9-bf58-4c14d3d79b79", 00:04:01.349 "assigned_rate_limits": { 00:04:01.349 "rw_ios_per_sec": 0, 00:04:01.349 "rw_mbytes_per_sec": 0, 00:04:01.349 "r_mbytes_per_sec": 0, 00:04:01.349 "w_mbytes_per_sec": 0 00:04:01.349 }, 00:04:01.349 "claimed": true, 00:04:01.349 "claim_type": "exclusive_write", 00:04:01.349 "zoned": false, 00:04:01.349 "supported_io_types": { 00:04:01.349 "read": true, 00:04:01.349 "write": true, 00:04:01.349 "unmap": true, 00:04:01.349 "flush": true, 00:04:01.349 "reset": true, 00:04:01.349 "nvme_admin": false, 00:04:01.349 "nvme_io": false, 00:04:01.349 "nvme_io_md": false, 00:04:01.349 "write_zeroes": true, 00:04:01.349 "zcopy": true, 00:04:01.349 "get_zone_info": false, 00:04:01.349 "zone_management": false, 00:04:01.349 "zone_append": false, 00:04:01.349 "compare": false, 00:04:01.349 "compare_and_write": false, 00:04:01.349 "abort": true, 00:04:01.349 "seek_hole": false, 00:04:01.349 "seek_data": false, 00:04:01.349 "copy": true, 00:04:01.349 "nvme_iov_md": false 00:04:01.349 }, 00:04:01.349 "memory_domains": [ 00:04:01.349 { 00:04:01.349 "dma_device_id": "system", 00:04:01.349 "dma_device_type": 1 00:04:01.349 }, 00:04:01.349 { 00:04:01.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.349 "dma_device_type": 2 00:04:01.349 } 00:04:01.349 ], 00:04:01.349 "driver_specific": {} 00:04:01.349 }, 00:04:01.349 { 00:04:01.349 "name": "Passthru0", 00:04:01.349 "aliases": [ 00:04:01.349 "f58b0006-9abd-5af4-999e-2a75d95d6eff" 00:04:01.349 ], 00:04:01.349 "product_name": "passthru", 00:04:01.349 
"block_size": 512, 00:04:01.349 "num_blocks": 16384, 00:04:01.349 "uuid": "f58b0006-9abd-5af4-999e-2a75d95d6eff", 00:04:01.349 "assigned_rate_limits": { 00:04:01.349 "rw_ios_per_sec": 0, 00:04:01.349 "rw_mbytes_per_sec": 0, 00:04:01.349 "r_mbytes_per_sec": 0, 00:04:01.349 "w_mbytes_per_sec": 0 00:04:01.349 }, 00:04:01.349 "claimed": false, 00:04:01.349 "zoned": false, 00:04:01.349 "supported_io_types": { 00:04:01.349 "read": true, 00:04:01.349 "write": true, 00:04:01.349 "unmap": true, 00:04:01.349 "flush": true, 00:04:01.349 "reset": true, 00:04:01.349 "nvme_admin": false, 00:04:01.349 "nvme_io": false, 00:04:01.349 "nvme_io_md": false, 00:04:01.349 "write_zeroes": true, 00:04:01.349 "zcopy": true, 00:04:01.349 "get_zone_info": false, 00:04:01.349 "zone_management": false, 00:04:01.349 "zone_append": false, 00:04:01.349 "compare": false, 00:04:01.349 "compare_and_write": false, 00:04:01.349 "abort": true, 00:04:01.349 "seek_hole": false, 00:04:01.349 "seek_data": false, 00:04:01.349 "copy": true, 00:04:01.349 "nvme_iov_md": false 00:04:01.349 }, 00:04:01.349 "memory_domains": [ 00:04:01.349 { 00:04:01.349 "dma_device_id": "system", 00:04:01.349 "dma_device_type": 1 00:04:01.349 }, 00:04:01.349 { 00:04:01.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.349 "dma_device_type": 2 00:04:01.349 } 00:04:01.349 ], 00:04:01.349 "driver_specific": { 00:04:01.349 "passthru": { 00:04:01.349 "name": "Passthru0", 00:04:01.349 "base_bdev_name": "Malloc0" 00:04:01.349 } 00:04:01.349 } 00:04:01.349 } 00:04:01.349 ]' 00:04:01.349 16:56:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:01.349 16:56:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:01.349 16:56:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:01.349 16:56:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.349 16:56:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.349 16:56:25 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.349 16:56:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:01.349 16:56:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.349 16:56:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.349 16:56:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.349 16:56:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:01.349 16:56:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.349 16:56:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.349 16:56:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.349 16:56:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:01.349 16:56:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:01.349 ************************************ 00:04:01.349 END TEST rpc_integrity 00:04:01.349 ************************************ 00:04:01.349 16:56:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:01.349 00:04:01.349 real 0m0.356s 00:04:01.349 user 0m0.215s 00:04:01.349 sys 0m0.039s 00:04:01.349 16:56:25 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.349 16:56:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.349 16:56:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:01.350 16:56:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.350 16:56:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.350 16:56:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.350 ************************************ 00:04:01.350 START TEST rpc_plugins 00:04:01.350 ************************************ 00:04:01.350 16:56:25 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:01.350 16:56:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:01.350 16:56:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.350 16:56:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.608 16:56:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.609 16:56:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:01.609 16:56:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:01.609 16:56:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.609 16:56:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.609 16:56:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.609 16:56:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:01.609 { 00:04:01.609 "name": "Malloc1", 00:04:01.609 "aliases": [ 00:04:01.609 "38376f60-f433-42cd-9343-89ac1b223ff3" 00:04:01.609 ], 00:04:01.609 "product_name": "Malloc disk", 00:04:01.609 "block_size": 4096, 00:04:01.609 "num_blocks": 256, 00:04:01.609 "uuid": "38376f60-f433-42cd-9343-89ac1b223ff3", 00:04:01.609 "assigned_rate_limits": { 00:04:01.609 "rw_ios_per_sec": 0, 00:04:01.609 "rw_mbytes_per_sec": 0, 00:04:01.609 "r_mbytes_per_sec": 0, 00:04:01.609 "w_mbytes_per_sec": 0 00:04:01.609 }, 00:04:01.609 "claimed": false, 00:04:01.609 "zoned": false, 00:04:01.609 "supported_io_types": { 00:04:01.609 "read": true, 00:04:01.609 "write": true, 00:04:01.609 "unmap": true, 00:04:01.609 "flush": true, 00:04:01.609 "reset": true, 00:04:01.609 "nvme_admin": false, 00:04:01.609 "nvme_io": false, 00:04:01.609 "nvme_io_md": false, 00:04:01.609 "write_zeroes": true, 00:04:01.609 "zcopy": true, 00:04:01.609 "get_zone_info": false, 00:04:01.609 "zone_management": false, 00:04:01.609 "zone_append": false, 00:04:01.609 "compare": false, 00:04:01.609 "compare_and_write": false, 00:04:01.609 "abort": true, 00:04:01.609 "seek_hole": false, 00:04:01.609 "seek_data": false, 00:04:01.609 "copy": 
true, 00:04:01.609 "nvme_iov_md": false 00:04:01.609 }, 00:04:01.609 "memory_domains": [ 00:04:01.609 { 00:04:01.609 "dma_device_id": "system", 00:04:01.609 "dma_device_type": 1 00:04:01.609 }, 00:04:01.609 { 00:04:01.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.609 "dma_device_type": 2 00:04:01.609 } 00:04:01.609 ], 00:04:01.609 "driver_specific": {} 00:04:01.609 } 00:04:01.609 ]' 00:04:01.609 16:56:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:01.609 16:56:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:01.609 16:56:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:01.609 16:56:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.609 16:56:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.609 16:56:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.609 16:56:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:01.609 16:56:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.609 16:56:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.609 16:56:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.609 16:56:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:01.609 16:56:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:01.609 ************************************ 00:04:01.609 END TEST rpc_plugins 00:04:01.609 ************************************ 00:04:01.609 16:56:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:01.609 00:04:01.609 real 0m0.167s 00:04:01.609 user 0m0.107s 00:04:01.609 sys 0m0.016s 00:04:01.609 16:56:25 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.609 16:56:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.609 16:56:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:01.609 16:56:25 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.609 16:56:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.609 16:56:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.609 ************************************ 00:04:01.609 START TEST rpc_trace_cmd_test 00:04:01.609 ************************************ 00:04:01.609 16:56:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:01.609 16:56:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:01.609 16:56:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:01.609 16:56:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.609 16:56:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:01.609 16:56:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.609 16:56:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:01.609 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56767", 00:04:01.609 "tpoint_group_mask": "0x8", 00:04:01.609 "iscsi_conn": { 00:04:01.609 "mask": "0x2", 00:04:01.609 "tpoint_mask": "0x0" 00:04:01.609 }, 00:04:01.609 "scsi": { 00:04:01.609 "mask": "0x4", 00:04:01.609 "tpoint_mask": "0x0" 00:04:01.609 }, 00:04:01.609 "bdev": { 00:04:01.609 "mask": "0x8", 00:04:01.609 "tpoint_mask": "0xffffffffffffffff" 00:04:01.609 }, 00:04:01.609 "nvmf_rdma": { 00:04:01.609 "mask": "0x10", 00:04:01.609 "tpoint_mask": "0x0" 00:04:01.609 }, 00:04:01.609 "nvmf_tcp": { 00:04:01.609 "mask": "0x20", 00:04:01.609 "tpoint_mask": "0x0" 00:04:01.609 }, 00:04:01.609 "ftl": { 00:04:01.609 "mask": "0x40", 00:04:01.609 "tpoint_mask": "0x0" 00:04:01.609 }, 00:04:01.609 "blobfs": { 00:04:01.609 "mask": "0x80", 00:04:01.609 "tpoint_mask": "0x0" 00:04:01.609 }, 00:04:01.609 "dsa": { 00:04:01.609 "mask": "0x200", 00:04:01.609 "tpoint_mask": "0x0" 00:04:01.609 }, 00:04:01.609 "thread": { 00:04:01.609 "mask": "0x400", 00:04:01.609 
"tpoint_mask": "0x0" 00:04:01.609 }, 00:04:01.609 "nvme_pcie": { 00:04:01.609 "mask": "0x800", 00:04:01.609 "tpoint_mask": "0x0" 00:04:01.609 }, 00:04:01.609 "iaa": { 00:04:01.609 "mask": "0x1000", 00:04:01.609 "tpoint_mask": "0x0" 00:04:01.609 }, 00:04:01.609 "nvme_tcp": { 00:04:01.609 "mask": "0x2000", 00:04:01.609 "tpoint_mask": "0x0" 00:04:01.609 }, 00:04:01.609 "bdev_nvme": { 00:04:01.609 "mask": "0x4000", 00:04:01.609 "tpoint_mask": "0x0" 00:04:01.609 }, 00:04:01.609 "sock": { 00:04:01.609 "mask": "0x8000", 00:04:01.609 "tpoint_mask": "0x0" 00:04:01.609 }, 00:04:01.609 "blob": { 00:04:01.609 "mask": "0x10000", 00:04:01.609 "tpoint_mask": "0x0" 00:04:01.609 }, 00:04:01.609 "bdev_raid": { 00:04:01.609 "mask": "0x20000", 00:04:01.609 "tpoint_mask": "0x0" 00:04:01.609 }, 00:04:01.609 "scheduler": { 00:04:01.609 "mask": "0x40000", 00:04:01.609 "tpoint_mask": "0x0" 00:04:01.609 } 00:04:01.609 }' 00:04:01.609 16:56:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:01.868 16:56:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:01.868 16:56:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:01.868 16:56:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:01.868 16:56:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:01.868 16:56:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:01.868 16:56:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:01.868 16:56:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:01.868 16:56:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:01.868 ************************************ 00:04:01.868 END TEST rpc_trace_cmd_test 00:04:01.868 ************************************ 00:04:01.868 16:56:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:01.868 00:04:01.868 real 0m0.273s 00:04:01.868 user 
0m0.232s 00:04:01.868 sys 0m0.033s 00:04:01.868 16:56:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.868 16:56:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:02.127 16:56:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:02.127 16:56:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:02.128 16:56:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:02.128 16:56:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.128 16:56:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.128 16:56:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.128 ************************************ 00:04:02.128 START TEST rpc_daemon_integrity 00:04:02.128 ************************************ 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:02.128 { 00:04:02.128 "name": "Malloc2", 00:04:02.128 "aliases": [ 00:04:02.128 "b27c828d-645b-476e-b518-55d523ad43dc" 00:04:02.128 ], 00:04:02.128 "product_name": "Malloc disk", 00:04:02.128 "block_size": 512, 00:04:02.128 "num_blocks": 16384, 00:04:02.128 "uuid": "b27c828d-645b-476e-b518-55d523ad43dc", 00:04:02.128 "assigned_rate_limits": { 00:04:02.128 "rw_ios_per_sec": 0, 00:04:02.128 "rw_mbytes_per_sec": 0, 00:04:02.128 "r_mbytes_per_sec": 0, 00:04:02.128 "w_mbytes_per_sec": 0 00:04:02.128 }, 00:04:02.128 "claimed": false, 00:04:02.128 "zoned": false, 00:04:02.128 "supported_io_types": { 00:04:02.128 "read": true, 00:04:02.128 "write": true, 00:04:02.128 "unmap": true, 00:04:02.128 "flush": true, 00:04:02.128 "reset": true, 00:04:02.128 "nvme_admin": false, 00:04:02.128 "nvme_io": false, 00:04:02.128 "nvme_io_md": false, 00:04:02.128 "write_zeroes": true, 00:04:02.128 "zcopy": true, 00:04:02.128 "get_zone_info": false, 00:04:02.128 "zone_management": false, 00:04:02.128 "zone_append": false, 00:04:02.128 "compare": false, 00:04:02.128 "compare_and_write": false, 00:04:02.128 "abort": true, 00:04:02.128 "seek_hole": false, 00:04:02.128 "seek_data": false, 00:04:02.128 "copy": true, 00:04:02.128 "nvme_iov_md": false 00:04:02.128 }, 00:04:02.128 "memory_domains": [ 00:04:02.128 { 00:04:02.128 "dma_device_id": "system", 00:04:02.128 "dma_device_type": 1 00:04:02.128 }, 00:04:02.128 { 00:04:02.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.128 "dma_device_type": 2 00:04:02.128 } 
00:04:02.128 ], 00:04:02.128 "driver_specific": {} 00:04:02.128 } 00:04:02.128 ]' 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.128 [2024-11-20 16:56:25.909091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:02.128 [2024-11-20 16:56:25.909310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:02.128 [2024-11-20 16:56:25.909356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:02.128 [2024-11-20 16:56:25.909376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:02.128 [2024-11-20 16:56:25.912309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:02.128 [2024-11-20 16:56:25.912474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:02.128 Passthru0 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.128 16:56:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:02.128 { 00:04:02.128 "name": "Malloc2", 00:04:02.128 "aliases": [ 00:04:02.128 "b27c828d-645b-476e-b518-55d523ad43dc" 
00:04:02.128 ], 00:04:02.128 "product_name": "Malloc disk", 00:04:02.128 "block_size": 512, 00:04:02.128 "num_blocks": 16384, 00:04:02.128 "uuid": "b27c828d-645b-476e-b518-55d523ad43dc", 00:04:02.128 "assigned_rate_limits": { 00:04:02.128 "rw_ios_per_sec": 0, 00:04:02.128 "rw_mbytes_per_sec": 0, 00:04:02.128 "r_mbytes_per_sec": 0, 00:04:02.128 "w_mbytes_per_sec": 0 00:04:02.128 }, 00:04:02.128 "claimed": true, 00:04:02.128 "claim_type": "exclusive_write", 00:04:02.128 "zoned": false, 00:04:02.128 "supported_io_types": { 00:04:02.128 "read": true, 00:04:02.128 "write": true, 00:04:02.128 "unmap": true, 00:04:02.128 "flush": true, 00:04:02.128 "reset": true, 00:04:02.128 "nvme_admin": false, 00:04:02.128 "nvme_io": false, 00:04:02.128 "nvme_io_md": false, 00:04:02.128 "write_zeroes": true, 00:04:02.128 "zcopy": true, 00:04:02.128 "get_zone_info": false, 00:04:02.128 "zone_management": false, 00:04:02.128 "zone_append": false, 00:04:02.128 "compare": false, 00:04:02.128 "compare_and_write": false, 00:04:02.128 "abort": true, 00:04:02.128 "seek_hole": false, 00:04:02.128 "seek_data": false, 00:04:02.128 "copy": true, 00:04:02.128 "nvme_iov_md": false 00:04:02.128 }, 00:04:02.128 "memory_domains": [ 00:04:02.128 { 00:04:02.128 "dma_device_id": "system", 00:04:02.128 "dma_device_type": 1 00:04:02.128 }, 00:04:02.128 { 00:04:02.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.128 "dma_device_type": 2 00:04:02.128 } 00:04:02.128 ], 00:04:02.128 "driver_specific": {} 00:04:02.128 }, 00:04:02.128 { 00:04:02.128 "name": "Passthru0", 00:04:02.128 "aliases": [ 00:04:02.128 "9cc01f03-8d17-5d88-a7e7-4e1cc320dd67" 00:04:02.128 ], 00:04:02.128 "product_name": "passthru", 00:04:02.128 "block_size": 512, 00:04:02.128 "num_blocks": 16384, 00:04:02.128 "uuid": "9cc01f03-8d17-5d88-a7e7-4e1cc320dd67", 00:04:02.128 "assigned_rate_limits": { 00:04:02.128 "rw_ios_per_sec": 0, 00:04:02.128 "rw_mbytes_per_sec": 0, 00:04:02.128 "r_mbytes_per_sec": 0, 00:04:02.128 "w_mbytes_per_sec": 0 
00:04:02.129 }, 00:04:02.129 "claimed": false, 00:04:02.129 "zoned": false, 00:04:02.129 "supported_io_types": { 00:04:02.129 "read": true, 00:04:02.129 "write": true, 00:04:02.129 "unmap": true, 00:04:02.129 "flush": true, 00:04:02.129 "reset": true, 00:04:02.129 "nvme_admin": false, 00:04:02.129 "nvme_io": false, 00:04:02.129 "nvme_io_md": false, 00:04:02.129 "write_zeroes": true, 00:04:02.129 "zcopy": true, 00:04:02.129 "get_zone_info": false, 00:04:02.129 "zone_management": false, 00:04:02.129 "zone_append": false, 00:04:02.129 "compare": false, 00:04:02.129 "compare_and_write": false, 00:04:02.129 "abort": true, 00:04:02.129 "seek_hole": false, 00:04:02.129 "seek_data": false, 00:04:02.129 "copy": true, 00:04:02.129 "nvme_iov_md": false 00:04:02.129 }, 00:04:02.129 "memory_domains": [ 00:04:02.129 { 00:04:02.129 "dma_device_id": "system", 00:04:02.129 "dma_device_type": 1 00:04:02.129 }, 00:04:02.129 { 00:04:02.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.129 "dma_device_type": 2 00:04:02.129 } 00:04:02.129 ], 00:04:02.129 "driver_specific": { 00:04:02.129 "passthru": { 00:04:02.129 "name": "Passthru0", 00:04:02.129 "base_bdev_name": "Malloc2" 00:04:02.129 } 00:04:02.129 } 00:04:02.129 } 00:04:02.129 ]' 00:04:02.129 16:56:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:02.388 16:56:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:02.388 16:56:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:02.388 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.388 16:56:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.388 16:56:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.388 16:56:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:02.388 16:56:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:02.388 16:56:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.388 16:56:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.388 16:56:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:02.388 16:56:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.388 16:56:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.388 16:56:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.388 16:56:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:02.388 16:56:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:02.388 ************************************ 00:04:02.388 END TEST rpc_daemon_integrity 00:04:02.388 ************************************ 00:04:02.388 16:56:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:02.388 00:04:02.388 real 0m0.353s 00:04:02.388 user 0m0.223s 00:04:02.388 sys 0m0.034s 00:04:02.388 16:56:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.388 16:56:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.388 16:56:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:02.388 16:56:26 rpc -- rpc/rpc.sh@84 -- # killprocess 56767 00:04:02.388 16:56:26 rpc -- common/autotest_common.sh@954 -- # '[' -z 56767 ']' 00:04:02.388 16:56:26 rpc -- common/autotest_common.sh@958 -- # kill -0 56767 00:04:02.388 16:56:26 rpc -- common/autotest_common.sh@959 -- # uname 00:04:02.388 16:56:26 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:02.388 16:56:26 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56767 00:04:02.388 killing process with pid 56767 00:04:02.388 16:56:26 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:02.388 16:56:26 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:04:02.388 16:56:26 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56767' 00:04:02.388 16:56:26 rpc -- common/autotest_common.sh@973 -- # kill 56767 00:04:02.388 16:56:26 rpc -- common/autotest_common.sh@978 -- # wait 56767 00:04:04.919 00:04:04.919 real 0m4.956s 00:04:04.919 user 0m5.679s 00:04:04.919 sys 0m0.881s 00:04:04.919 16:56:28 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.919 ************************************ 00:04:04.919 END TEST rpc 00:04:04.919 ************************************ 00:04:04.919 16:56:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.919 16:56:28 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:04.919 16:56:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.919 16:56:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.919 16:56:28 -- common/autotest_common.sh@10 -- # set +x 00:04:04.919 ************************************ 00:04:04.919 START TEST skip_rpc 00:04:04.919 ************************************ 00:04:04.919 16:56:28 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:04.919 * Looking for test storage... 
00:04:04.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:04.919 16:56:28 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:04.919 16:56:28 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:04.919 16:56:28 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:04.919 16:56:28 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.919 16:56:28 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:04.919 16:56:28 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.919 16:56:28 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:04.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.919 --rc genhtml_branch_coverage=1 00:04:04.919 --rc genhtml_function_coverage=1 00:04:04.919 --rc genhtml_legend=1 00:04:04.919 --rc geninfo_all_blocks=1 00:04:04.919 --rc geninfo_unexecuted_blocks=1 00:04:04.919 00:04:04.919 ' 00:04:04.920 16:56:28 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:04.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.920 --rc genhtml_branch_coverage=1 00:04:04.920 --rc genhtml_function_coverage=1 00:04:04.920 --rc genhtml_legend=1 00:04:04.920 --rc geninfo_all_blocks=1 00:04:04.920 --rc geninfo_unexecuted_blocks=1 00:04:04.920 00:04:04.920 ' 00:04:04.920 16:56:28 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:04.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.920 --rc genhtml_branch_coverage=1 00:04:04.920 --rc genhtml_function_coverage=1 00:04:04.920 --rc genhtml_legend=1 00:04:04.920 --rc geninfo_all_blocks=1 00:04:04.920 --rc geninfo_unexecuted_blocks=1 00:04:04.920 00:04:04.920 ' 00:04:04.920 16:56:28 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:04.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.920 --rc genhtml_branch_coverage=1 00:04:04.920 --rc genhtml_function_coverage=1 00:04:04.920 --rc genhtml_legend=1 00:04:04.920 --rc geninfo_all_blocks=1 00:04:04.920 --rc geninfo_unexecuted_blocks=1 00:04:04.920 00:04:04.920 ' 00:04:04.920 16:56:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:04.920 16:56:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:04.920 16:56:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:04.920 16:56:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.920 16:56:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.920 16:56:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.920 ************************************ 00:04:04.920 START TEST skip_rpc 00:04:04.920 ************************************ 00:04:04.920 16:56:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:04.920 16:56:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56996 00:04:04.920 16:56:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.920 16:56:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:04.920 16:56:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:04.920 [2024-11-20 16:56:28.643949] Starting SPDK v25.01-pre 
git sha1 25916e30c / DPDK 24.03.0 initialization... 00:04:04.920 [2024-11-20 16:56:28.644342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56996 ] 00:04:05.179 [2024-11-20 16:56:28.826605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.179 [2024-11-20 16:56:28.949573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56996 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56996 ']' 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56996 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56996 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56996' 00:04:10.451 killing process with pid 56996 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56996 00:04:10.451 16:56:33 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56996 00:04:11.828 00:04:11.828 real 0m7.005s 00:04:11.828 user 0m6.456s 00:04:11.828 sys 0m0.447s 00:04:11.828 16:56:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.828 ************************************ 00:04:11.828 END TEST skip_rpc 00:04:11.828 ************************************ 00:04:11.828 16:56:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.828 16:56:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:11.828 16:56:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.828 16:56:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.828 16:56:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.828 
************************************ 00:04:11.828 START TEST skip_rpc_with_json 00:04:11.828 ************************************ 00:04:11.828 16:56:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:11.828 16:56:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:11.828 16:56:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57099 00:04:11.828 16:56:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:11.828 16:56:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.828 16:56:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57099 00:04:11.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.828 16:56:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57099 ']' 00:04:11.828 16:56:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.828 16:56:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:11.828 16:56:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.828 16:56:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:11.828 16:56:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.087 [2024-11-20 16:56:35.698824] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:04:12.087 [2024-11-20 16:56:35.699257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57099 ] 00:04:12.087 [2024-11-20 16:56:35.877392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.345 [2024-11-20 16:56:35.986747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.914 16:56:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:12.914 16:56:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:12.914 16:56:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:12.914 16:56:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.914 16:56:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.174 [2024-11-20 16:56:36.783351] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:13.174 request: 00:04:13.174 { 00:04:13.174 "trtype": "tcp", 00:04:13.174 "method": "nvmf_get_transports", 00:04:13.174 "req_id": 1 00:04:13.174 } 00:04:13.174 Got JSON-RPC error response 00:04:13.174 response: 00:04:13.174 { 00:04:13.174 "code": -19, 00:04:13.174 "message": "No such device" 00:04:13.174 } 00:04:13.174 16:56:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:13.175 16:56:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:13.175 16:56:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.175 16:56:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.175 [2024-11-20 16:56:36.795462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:13.175 16:56:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.175 16:56:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:13.175 16:56:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.175 16:56:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.175 16:56:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.175 16:56:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:13.175 { 00:04:13.175 "subsystems": [ 00:04:13.175 { 00:04:13.175 "subsystem": "fsdev", 00:04:13.175 "config": [ 00:04:13.175 { 00:04:13.175 "method": "fsdev_set_opts", 00:04:13.175 "params": { 00:04:13.175 "fsdev_io_pool_size": 65535, 00:04:13.175 "fsdev_io_cache_size": 256 00:04:13.175 } 00:04:13.175 } 00:04:13.175 ] 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "subsystem": "keyring", 00:04:13.175 "config": [] 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "subsystem": "iobuf", 00:04:13.175 "config": [ 00:04:13.175 { 00:04:13.175 "method": "iobuf_set_options", 00:04:13.175 "params": { 00:04:13.175 "small_pool_count": 8192, 00:04:13.175 "large_pool_count": 1024, 00:04:13.175 "small_bufsize": 8192, 00:04:13.175 "large_bufsize": 135168, 00:04:13.175 "enable_numa": false 00:04:13.175 } 00:04:13.175 } 00:04:13.175 ] 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "subsystem": "sock", 00:04:13.175 "config": [ 00:04:13.175 { 00:04:13.175 "method": "sock_set_default_impl", 00:04:13.175 "params": { 00:04:13.175 "impl_name": "posix" 00:04:13.175 } 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "method": "sock_impl_set_options", 00:04:13.175 "params": { 00:04:13.175 "impl_name": "ssl", 00:04:13.175 "recv_buf_size": 4096, 00:04:13.175 "send_buf_size": 4096, 00:04:13.175 "enable_recv_pipe": true, 00:04:13.175 "enable_quickack": false, 00:04:13.175 
"enable_placement_id": 0, 00:04:13.175 "enable_zerocopy_send_server": true, 00:04:13.175 "enable_zerocopy_send_client": false, 00:04:13.175 "zerocopy_threshold": 0, 00:04:13.175 "tls_version": 0, 00:04:13.175 "enable_ktls": false 00:04:13.175 } 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "method": "sock_impl_set_options", 00:04:13.175 "params": { 00:04:13.175 "impl_name": "posix", 00:04:13.175 "recv_buf_size": 2097152, 00:04:13.175 "send_buf_size": 2097152, 00:04:13.175 "enable_recv_pipe": true, 00:04:13.175 "enable_quickack": false, 00:04:13.175 "enable_placement_id": 0, 00:04:13.175 "enable_zerocopy_send_server": true, 00:04:13.175 "enable_zerocopy_send_client": false, 00:04:13.175 "zerocopy_threshold": 0, 00:04:13.175 "tls_version": 0, 00:04:13.175 "enable_ktls": false 00:04:13.175 } 00:04:13.175 } 00:04:13.175 ] 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "subsystem": "vmd", 00:04:13.175 "config": [] 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "subsystem": "accel", 00:04:13.175 "config": [ 00:04:13.175 { 00:04:13.175 "method": "accel_set_options", 00:04:13.175 "params": { 00:04:13.175 "small_cache_size": 128, 00:04:13.175 "large_cache_size": 16, 00:04:13.175 "task_count": 2048, 00:04:13.175 "sequence_count": 2048, 00:04:13.175 "buf_count": 2048 00:04:13.175 } 00:04:13.175 } 00:04:13.175 ] 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "subsystem": "bdev", 00:04:13.175 "config": [ 00:04:13.175 { 00:04:13.175 "method": "bdev_set_options", 00:04:13.175 "params": { 00:04:13.175 "bdev_io_pool_size": 65535, 00:04:13.175 "bdev_io_cache_size": 256, 00:04:13.175 "bdev_auto_examine": true, 00:04:13.175 "iobuf_small_cache_size": 128, 00:04:13.175 "iobuf_large_cache_size": 16 00:04:13.175 } 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "method": "bdev_raid_set_options", 00:04:13.175 "params": { 00:04:13.175 "process_window_size_kb": 1024, 00:04:13.175 "process_max_bandwidth_mb_sec": 0 00:04:13.175 } 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "method": "bdev_iscsi_set_options", 
00:04:13.175 "params": { 00:04:13.175 "timeout_sec": 30 00:04:13.175 } 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "method": "bdev_nvme_set_options", 00:04:13.175 "params": { 00:04:13.175 "action_on_timeout": "none", 00:04:13.175 "timeout_us": 0, 00:04:13.175 "timeout_admin_us": 0, 00:04:13.175 "keep_alive_timeout_ms": 10000, 00:04:13.175 "arbitration_burst": 0, 00:04:13.175 "low_priority_weight": 0, 00:04:13.175 "medium_priority_weight": 0, 00:04:13.175 "high_priority_weight": 0, 00:04:13.175 "nvme_adminq_poll_period_us": 10000, 00:04:13.175 "nvme_ioq_poll_period_us": 0, 00:04:13.175 "io_queue_requests": 0, 00:04:13.175 "delay_cmd_submit": true, 00:04:13.175 "transport_retry_count": 4, 00:04:13.175 "bdev_retry_count": 3, 00:04:13.175 "transport_ack_timeout": 0, 00:04:13.175 "ctrlr_loss_timeout_sec": 0, 00:04:13.175 "reconnect_delay_sec": 0, 00:04:13.175 "fast_io_fail_timeout_sec": 0, 00:04:13.175 "disable_auto_failback": false, 00:04:13.175 "generate_uuids": false, 00:04:13.175 "transport_tos": 0, 00:04:13.175 "nvme_error_stat": false, 00:04:13.175 "rdma_srq_size": 0, 00:04:13.175 "io_path_stat": false, 00:04:13.175 "allow_accel_sequence": false, 00:04:13.175 "rdma_max_cq_size": 0, 00:04:13.175 "rdma_cm_event_timeout_ms": 0, 00:04:13.175 "dhchap_digests": [ 00:04:13.175 "sha256", 00:04:13.175 "sha384", 00:04:13.175 "sha512" 00:04:13.175 ], 00:04:13.175 "dhchap_dhgroups": [ 00:04:13.175 "null", 00:04:13.175 "ffdhe2048", 00:04:13.175 "ffdhe3072", 00:04:13.175 "ffdhe4096", 00:04:13.175 "ffdhe6144", 00:04:13.175 "ffdhe8192" 00:04:13.175 ] 00:04:13.175 } 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "method": "bdev_nvme_set_hotplug", 00:04:13.175 "params": { 00:04:13.175 "period_us": 100000, 00:04:13.175 "enable": false 00:04:13.175 } 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "method": "bdev_wait_for_examine" 00:04:13.175 } 00:04:13.175 ] 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "subsystem": "scsi", 00:04:13.175 "config": null 00:04:13.175 }, 00:04:13.175 { 
00:04:13.175 "subsystem": "scheduler", 00:04:13.175 "config": [ 00:04:13.175 { 00:04:13.175 "method": "framework_set_scheduler", 00:04:13.175 "params": { 00:04:13.175 "name": "static" 00:04:13.175 } 00:04:13.175 } 00:04:13.175 ] 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "subsystem": "vhost_scsi", 00:04:13.175 "config": [] 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "subsystem": "vhost_blk", 00:04:13.175 "config": [] 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "subsystem": "ublk", 00:04:13.175 "config": [] 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "subsystem": "nbd", 00:04:13.175 "config": [] 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "subsystem": "nvmf", 00:04:13.175 "config": [ 00:04:13.175 { 00:04:13.175 "method": "nvmf_set_config", 00:04:13.175 "params": { 00:04:13.175 "discovery_filter": "match_any", 00:04:13.175 "admin_cmd_passthru": { 00:04:13.175 "identify_ctrlr": false 00:04:13.175 }, 00:04:13.175 "dhchap_digests": [ 00:04:13.175 "sha256", 00:04:13.175 "sha384", 00:04:13.175 "sha512" 00:04:13.175 ], 00:04:13.175 "dhchap_dhgroups": [ 00:04:13.175 "null", 00:04:13.175 "ffdhe2048", 00:04:13.175 "ffdhe3072", 00:04:13.175 "ffdhe4096", 00:04:13.175 "ffdhe6144", 00:04:13.175 "ffdhe8192" 00:04:13.175 ] 00:04:13.175 } 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "method": "nvmf_set_max_subsystems", 00:04:13.175 "params": { 00:04:13.175 "max_subsystems": 1024 00:04:13.175 } 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "method": "nvmf_set_crdt", 00:04:13.175 "params": { 00:04:13.175 "crdt1": 0, 00:04:13.175 "crdt2": 0, 00:04:13.175 "crdt3": 0 00:04:13.175 } 00:04:13.175 }, 00:04:13.175 { 00:04:13.175 "method": "nvmf_create_transport", 00:04:13.175 "params": { 00:04:13.175 "trtype": "TCP", 00:04:13.175 "max_queue_depth": 128, 00:04:13.175 "max_io_qpairs_per_ctrlr": 127, 00:04:13.175 "in_capsule_data_size": 4096, 00:04:13.175 "max_io_size": 131072, 00:04:13.175 "io_unit_size": 131072, 00:04:13.175 "max_aq_depth": 128, 00:04:13.175 "num_shared_buffers": 511, 
00:04:13.175 "buf_cache_size": 4294967295, 00:04:13.175 "dif_insert_or_strip": false, 00:04:13.176 "zcopy": false, 00:04:13.176 "c2h_success": true, 00:04:13.176 "sock_priority": 0, 00:04:13.176 "abort_timeout_sec": 1, 00:04:13.176 "ack_timeout": 0, 00:04:13.176 "data_wr_pool_size": 0 00:04:13.176 } 00:04:13.176 } 00:04:13.176 ] 00:04:13.176 }, 00:04:13.176 { 00:04:13.176 "subsystem": "iscsi", 00:04:13.176 "config": [ 00:04:13.176 { 00:04:13.176 "method": "iscsi_set_options", 00:04:13.176 "params": { 00:04:13.176 "node_base": "iqn.2016-06.io.spdk", 00:04:13.176 "max_sessions": 128, 00:04:13.176 "max_connections_per_session": 2, 00:04:13.176 "max_queue_depth": 64, 00:04:13.176 "default_time2wait": 2, 00:04:13.176 "default_time2retain": 20, 00:04:13.176 "first_burst_length": 8192, 00:04:13.176 "immediate_data": true, 00:04:13.176 "allow_duplicated_isid": false, 00:04:13.176 "error_recovery_level": 0, 00:04:13.176 "nop_timeout": 60, 00:04:13.176 "nop_in_interval": 30, 00:04:13.176 "disable_chap": false, 00:04:13.176 "require_chap": false, 00:04:13.176 "mutual_chap": false, 00:04:13.176 "chap_group": 0, 00:04:13.176 "max_large_datain_per_connection": 64, 00:04:13.176 "max_r2t_per_connection": 4, 00:04:13.176 "pdu_pool_size": 36864, 00:04:13.176 "immediate_data_pool_size": 16384, 00:04:13.176 "data_out_pool_size": 2048 00:04:13.176 } 00:04:13.176 } 00:04:13.176 ] 00:04:13.176 } 00:04:13.176 ] 00:04:13.176 } 00:04:13.176 16:56:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:13.176 16:56:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57099 00:04:13.176 16:56:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57099 ']' 00:04:13.176 16:56:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57099 00:04:13.176 16:56:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:13.176 16:56:36 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.176 16:56:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57099 00:04:13.176 killing process with pid 57099 00:04:13.176 16:56:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.176 16:56:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.176 16:56:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57099' 00:04:13.176 16:56:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57099 00:04:13.176 16:56:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57099 00:04:15.711 16:56:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57145 00:04:15.711 16:56:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:15.711 16:56:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:21.014 16:56:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57145 00:04:21.014 16:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57145 ']' 00:04:21.014 16:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57145 00:04:21.014 16:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:21.014 16:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.014 16:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57145 00:04:21.014 16:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.014 16:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:04:21.014 16:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57145' 00:04:21.014 killing process with pid 57145 00:04:21.014 16:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57145 00:04:21.014 16:56:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57145 00:04:22.391 16:56:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:22.391 16:56:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:22.391 ************************************ 00:04:22.391 END TEST skip_rpc_with_json 00:04:22.391 ************************************ 00:04:22.391 00:04:22.391 real 0m10.401s 00:04:22.391 user 0m9.821s 00:04:22.391 sys 0m0.959s 00:04:22.391 16:56:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.391 16:56:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.391 16:56:46 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:22.391 16:56:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.391 16:56:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.391 16:56:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.391 ************************************ 00:04:22.391 START TEST skip_rpc_with_delay 00:04:22.391 ************************************ 00:04:22.391 16:56:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:22.391 16:56:46 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.392 16:56:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:22.392 16:56:46 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.392 16:56:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.392 16:56:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.392 16:56:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.392 16:56:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.392 16:56:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.392 16:56:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.392 16:56:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.392 16:56:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:22.392 16:56:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.392 [2024-11-20 16:56:46.145684] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
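The `NOT ... --wait-for-rpc` invocation traced above is an expected-failure assertion: the step passes only because spdk_tgt rejects the flag and exits non-zero. A minimal sketch of that helper pattern (illustrative names and body, not the actual autotest_common.sh implementation, which also remaps exit statuses above 128 as the later `es=234` / `es=106` lines in this log show):

```shell
# Expected-failure wrapper: succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?        # run the command, capture its exit status
    (( es != 0 ))        # the assertion holds only if the command failed
}
```

With such a helper, `NOT spdk_tgt --no-rpc-server --wait-for-rpc` passes precisely because the target aborts with the *ERROR* line shown above.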
00:04:22.392 ************************************ 00:04:22.392 END TEST skip_rpc_with_delay 00:04:22.392 ************************************ 00:04:22.392 16:56:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:22.392 16:56:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:22.392 16:56:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:22.392 16:56:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:22.392 00:04:22.392 real 0m0.194s 00:04:22.392 user 0m0.107s 00:04:22.392 sys 0m0.084s 00:04:22.392 16:56:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.392 16:56:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:22.392 16:56:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:22.392 16:56:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:22.392 16:56:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:22.392 16:56:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.392 16:56:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.392 16:56:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.651 ************************************ 00:04:22.651 START TEST exit_on_failed_rpc_init 00:04:22.651 ************************************ 00:04:22.651 16:56:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:22.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
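The "Waiting for process to start up and listen on UNIX domain socket" step above (the `waitforlisten` helper) blocks until the freshly started target's RPC socket is available. A hedged sketch of the underlying polling idea — function name, retry budget, and interval are illustrative; the real helper in autotest_common.sh also tracks the pid and retries the RPC itself:

```shell
# Poll until a path exists and is a UNIX domain socket, or give up.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```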
00:04:22.651 16:56:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57279 00:04:22.651 16:56:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57279 00:04:22.651 16:56:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:22.651 16:56:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57279 ']' 00:04:22.651 16:56:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.651 16:56:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.651 16:56:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.651 16:56:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.651 16:56:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:22.651 [2024-11-20 16:56:46.430614] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:04:22.651 [2024-11-20 16:56:46.430826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57279 ] 00:04:22.909 [2024-11-20 16:56:46.611837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.909 [2024-11-20 16:56:46.714119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.845 16:56:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.845 16:56:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:23.845 16:56:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.845 16:56:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.845 16:56:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:23.845 16:56:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.846 16:56:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.846 16:56:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:23.846 16:56:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.846 16:56:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:23.846 16:56:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.846 16:56:47 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:23.846 16:56:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.846 16:56:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:23.846 16:56:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.846 [2024-11-20 16:56:47.619469] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:04:23.846 [2024-11-20 16:56:47.619669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57297 ] 00:04:24.104 [2024-11-20 16:56:47.798385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.105 [2024-11-20 16:56:47.908566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.105 [2024-11-20 16:56:47.909017] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:24.105 [2024-11-20 16:56:47.909049] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:24.105 [2024-11-20 16:56:47.909066] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57279 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57279 ']' 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57279 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57279 00:04:24.364 killing process with pid 57279 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 57279' 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57279 00:04:24.364 16:56:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57279 00:04:26.898 00:04:26.898 real 0m3.871s 00:04:26.898 user 0m4.252s 00:04:26.898 sys 0m0.652s 00:04:26.898 16:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.898 ************************************ 00:04:26.898 END TEST exit_on_failed_rpc_init 00:04:26.898 ************************************ 00:04:26.898 16:56:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.898 16:56:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:26.898 ************************************ 00:04:26.898 END TEST skip_rpc 00:04:26.898 ************************************ 00:04:26.898 00:04:26.898 real 0m21.860s 00:04:26.898 user 0m20.811s 00:04:26.898 sys 0m2.352s 00:04:26.898 16:56:50 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.898 16:56:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.898 16:56:50 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:26.898 16:56:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.898 16:56:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.898 16:56:50 -- common/autotest_common.sh@10 -- # set +x 00:04:26.898 ************************************ 00:04:26.898 START TEST rpc_client 00:04:26.898 ************************************ 00:04:26.898 16:56:50 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:26.898 * Looking for test storage... 
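The lcov probe that follows runs the `lt 1.15 2` / `cmp_versions` machinery from scripts/common.sh: split both version strings on `.` and `-`, then compare numeric fields left to right, padding the shorter version with zeros. A condensed, illustrative sketch of the same idea (assumes purely numeric fields, as in the traced run):

```shell
# version_lt A B: exit 0 iff dotted version A is strictly less than B.
version_lt() {
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i
    for ((i = 0; i < n; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
        (( a < b )) && return 0             # earliest differing field decides
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}
```

So `version_lt 1.15 2` succeeds, matching the `lt 1.15 2` outcome traced below.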
00:04:26.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:26.899 16:56:50 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:26.899 16:56:50 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:26.899 16:56:50 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:26.899 16:56:50 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.899 16:56:50 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:26.899 16:56:50 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.899 16:56:50 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.899 --rc genhtml_branch_coverage=1 00:04:26.899 --rc genhtml_function_coverage=1 00:04:26.899 --rc genhtml_legend=1 00:04:26.899 --rc geninfo_all_blocks=1 00:04:26.899 --rc geninfo_unexecuted_blocks=1 00:04:26.899 00:04:26.899 ' 00:04:26.899 16:56:50 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.899 --rc genhtml_branch_coverage=1 00:04:26.899 --rc genhtml_function_coverage=1 00:04:26.899 --rc genhtml_legend=1 00:04:26.899 --rc geninfo_all_blocks=1 00:04:26.899 --rc geninfo_unexecuted_blocks=1 00:04:26.899 00:04:26.899 ' 00:04:26.899 16:56:50 rpc_client -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:26.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.899 --rc genhtml_branch_coverage=1 00:04:26.899 --rc genhtml_function_coverage=1 00:04:26.899 --rc genhtml_legend=1 00:04:26.899 --rc geninfo_all_blocks=1 00:04:26.899 --rc geninfo_unexecuted_blocks=1 00:04:26.899 00:04:26.899 ' 00:04:26.899 16:56:50 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.899 --rc genhtml_branch_coverage=1 00:04:26.899 --rc genhtml_function_coverage=1 00:04:26.899 --rc genhtml_legend=1 00:04:26.899 --rc geninfo_all_blocks=1 00:04:26.899 --rc geninfo_unexecuted_blocks=1 00:04:26.899 00:04:26.899 ' 00:04:26.899 16:56:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:26.899 OK 00:04:26.899 16:56:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:26.899 00:04:26.899 real 0m0.262s 00:04:26.899 user 0m0.154s 00:04:26.899 sys 0m0.115s 00:04:26.899 16:56:50 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.899 16:56:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:26.899 ************************************ 00:04:26.899 END TEST rpc_client 00:04:26.899 ************************************ 00:04:26.899 16:56:50 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:26.899 16:56:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.899 16:56:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.899 16:56:50 -- common/autotest_common.sh@10 -- # set +x 00:04:26.899 ************************************ 00:04:26.899 START TEST json_config 00:04:26.899 ************************************ 00:04:26.899 16:56:50 json_config -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:26.899 16:56:50 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:26.899 16:56:50 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:26.899 16:56:50 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:26.899 16:56:50 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:26.899 16:56:50 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.899 16:56:50 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.899 16:56:50 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.899 16:56:50 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.899 16:56:50 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.899 16:56:50 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.899 16:56:50 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.899 16:56:50 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.899 16:56:50 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.899 16:56:50 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.899 16:56:50 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.899 16:56:50 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:26.899 16:56:50 json_config -- scripts/common.sh@345 -- # : 1 00:04:26.899 16:56:50 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.899 16:56:50 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.899 16:56:50 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:26.899 16:56:50 json_config -- scripts/common.sh@353 -- # local d=1 00:04:26.899 16:56:50 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.899 16:56:50 json_config -- scripts/common.sh@355 -- # echo 1 00:04:26.899 16:56:50 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.899 16:56:50 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:26.899 16:56:50 json_config -- scripts/common.sh@353 -- # local d=2 00:04:26.899 16:56:50 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.899 16:56:50 json_config -- scripts/common.sh@355 -- # echo 2 00:04:26.899 16:56:50 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.899 16:56:50 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.899 16:56:50 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.899 16:56:50 json_config -- scripts/common.sh@368 -- # return 0 00:04:26.899 16:56:50 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.899 16:56:50 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.899 --rc genhtml_branch_coverage=1 00:04:26.899 --rc genhtml_function_coverage=1 00:04:26.899 --rc genhtml_legend=1 00:04:26.899 --rc geninfo_all_blocks=1 00:04:26.899 --rc geninfo_unexecuted_blocks=1 00:04:26.899 00:04:26.899 ' 00:04:26.899 16:56:50 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.899 --rc genhtml_branch_coverage=1 00:04:26.899 --rc genhtml_function_coverage=1 00:04:26.899 --rc genhtml_legend=1 00:04:26.899 --rc geninfo_all_blocks=1 00:04:26.899 --rc geninfo_unexecuted_blocks=1 00:04:26.899 00:04:26.899 ' 00:04:26.899 16:56:50 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:26.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.899 --rc genhtml_branch_coverage=1 00:04:26.899 --rc genhtml_function_coverage=1 00:04:26.899 --rc genhtml_legend=1 00:04:26.899 --rc geninfo_all_blocks=1 00:04:26.899 --rc geninfo_unexecuted_blocks=1 00:04:26.899 00:04:26.899 ' 00:04:26.899 16:56:50 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.899 --rc genhtml_branch_coverage=1 00:04:26.899 --rc genhtml_function_coverage=1 00:04:26.899 --rc genhtml_legend=1 00:04:26.899 --rc geninfo_all_blocks=1 00:04:26.899 --rc geninfo_unexecuted_blocks=1 00:04:26.899 00:04:26.899 ' 00:04:26.899 16:56:50 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:26.899 16:56:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:26.899 16:56:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:26.899 16:56:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:26.899 16:56:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:26.899 16:56:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:26.899 16:56:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:26.899 16:56:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:26.899 16:56:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:26.899 16:56:50 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:26.899 16:56:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:26.899 16:56:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:26.899 16:56:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3649dedb-1a77-4be6-960e-e1a7d201f91a 00:04:26.899 16:56:50 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=3649dedb-1a77-4be6-960e-e1a7d201f91a 00:04:26.900 16:56:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:26.900 16:56:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:26.900 16:56:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:26.900 16:56:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:26.900 16:56:50 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:26.900 16:56:50 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:26.900 16:56:50 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:26.900 16:56:50 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:26.900 16:56:50 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:26.900 16:56:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.900 16:56:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.900 16:56:50 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.900 16:56:50 json_config -- paths/export.sh@5 -- # export PATH 00:04:26.900 16:56:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.900 16:56:50 json_config -- nvmf/common.sh@51 -- # : 0 00:04:26.900 16:56:50 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:26.900 16:56:50 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:26.900 16:56:50 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:26.900 16:56:50 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:26.900 16:56:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:26.900 16:56:50 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:26.900 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:26.900 16:56:50 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:26.900 16:56:50 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:26.900 16:56:50 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:26.900 WARNING: No tests are enabled so not running JSON configuration tests 00:04:26.900 16:56:50 json_config -- 
json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:26.900 16:56:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:26.900 16:56:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:26.900 16:56:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:26.900 16:56:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:26.900 16:56:50 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:26.900 16:56:50 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:26.900 ************************************ 00:04:26.900 END TEST json_config 00:04:26.900 ************************************ 00:04:26.900 00:04:26.900 real 0m0.193s 00:04:26.900 user 0m0.128s 00:04:26.900 sys 0m0.067s 00:04:26.900 16:56:50 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.900 16:56:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.160 16:56:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:27.160 16:56:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.160 16:56:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.160 16:56:50 -- common/autotest_common.sh@10 -- # set +x 00:04:27.160 ************************************ 00:04:27.160 START TEST json_config_extra_key 00:04:27.160 ************************************ 00:04:27.160 16:56:50 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:27.160 16:56:50 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:27.160 16:56:50 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:04:27.160 16:56:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:27.160 16:56:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.160 16:56:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:27.160 16:56:50 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.160 16:56:50 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:27.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.160 --rc genhtml_branch_coverage=1 00:04:27.160 --rc genhtml_function_coverage=1 00:04:27.160 --rc genhtml_legend=1 00:04:27.160 --rc geninfo_all_blocks=1 00:04:27.160 --rc geninfo_unexecuted_blocks=1 00:04:27.160 00:04:27.160 ' 00:04:27.160 16:56:50 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:27.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.160 --rc genhtml_branch_coverage=1 00:04:27.160 --rc genhtml_function_coverage=1 00:04:27.160 --rc 
genhtml_legend=1 00:04:27.160 --rc geninfo_all_blocks=1 00:04:27.160 --rc geninfo_unexecuted_blocks=1 00:04:27.160 00:04:27.160 ' 00:04:27.160 16:56:50 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:27.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.160 --rc genhtml_branch_coverage=1 00:04:27.160 --rc genhtml_function_coverage=1 00:04:27.160 --rc genhtml_legend=1 00:04:27.160 --rc geninfo_all_blocks=1 00:04:27.160 --rc geninfo_unexecuted_blocks=1 00:04:27.160 00:04:27.160 ' 00:04:27.160 16:56:50 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:27.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.160 --rc genhtml_branch_coverage=1 00:04:27.160 --rc genhtml_function_coverage=1 00:04:27.160 --rc genhtml_legend=1 00:04:27.160 --rc geninfo_all_blocks=1 00:04:27.160 --rc geninfo_unexecuted_blocks=1 00:04:27.160 00:04:27.160 ' 00:04:27.160 16:56:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:27.160 16:56:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:27.160 16:56:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.160 16:56:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.160 16:56:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.160 16:56:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.160 16:56:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.160 16:56:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.160 16:56:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.160 16:56:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.160 16:56:50 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.160 16:56:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.160 16:56:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3649dedb-1a77-4be6-960e-e1a7d201f91a 00:04:27.160 16:56:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=3649dedb-1a77-4be6-960e-e1a7d201f91a 00:04:27.160 16:56:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.160 16:56:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.160 16:56:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:27.160 16:56:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:27.160 16:56:51 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:27.160 16:56:51 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:27.160 16:56:51 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.160 16:56:51 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.160 16:56:51 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.161 16:56:51 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.161 16:56:51 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.161 16:56:51 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.161 16:56:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:27.161 16:56:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.161 16:56:51 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:27.161 16:56:51 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:27.161 16:56:51 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:27.161 16:56:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:27.161 16:56:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.161 16:56:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:27.161 16:56:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:27.161 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:27.161 16:56:51 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:27.161 16:56:51 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:27.161 16:56:51 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:27.161 16:56:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:27.161 16:56:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:27.161 16:56:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:27.161 16:56:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:27.161 16:56:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:27.161 16:56:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:27.161 16:56:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:27.161 INFO: launching applications... 00:04:27.161 16:56:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:27.161 16:56:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:27.161 16:56:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:27.161 16:56:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:27.161 16:56:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:27.161 16:56:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:27.161 16:56:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:27.161 16:56:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:27.161 16:56:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:27.161 16:56:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:27.161 16:56:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.161 16:56:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.161 16:56:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57496 00:04:27.161 16:56:51 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:27.161 16:56:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:27.161 Waiting for target to run... 00:04:27.161 16:56:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57496 /var/tmp/spdk_tgt.sock 00:04:27.161 16:56:51 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57496 ']' 00:04:27.161 16:56:51 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:27.420 16:56:51 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.420 16:56:51 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:27.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:27.420 16:56:51 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.420 16:56:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:27.420 [2024-11-20 16:56:51.149845] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:04:27.420 [2024-11-20 16:56:51.150212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57496 ] 00:04:27.988 [2024-11-20 16:56:51.637096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.988 [2024-11-20 16:56:51.744100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.556 16:56:52 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.556 00:04:28.556 INFO: shutting down applications... 00:04:28.556 16:56:52 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:28.556 16:56:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:28.556 16:56:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:28.556 16:56:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:28.556 16:56:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:28.556 16:56:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:28.556 16:56:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57496 ]] 00:04:28.556 16:56:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57496 00:04:28.556 16:56:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:28.556 16:56:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:28.556 16:56:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57496 00:04:28.556 16:56:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:29.124 16:56:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:29.124 16:56:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.124 16:56:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57496 00:04:29.124 16:56:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:29.731 16:56:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:29.731 16:56:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.731 16:56:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57496 00:04:29.731 16:56:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.299 16:56:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.299 16:56:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.299 16:56:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57496 00:04:30.299 16:56:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.558 16:56:54 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:30.558 16:56:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.558 16:56:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57496 00:04:30.558 16:56:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:31.127 16:56:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:31.127 16:56:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.127 16:56:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57496 00:04:31.127 SPDK target shutdown done 00:04:31.127 Success 00:04:31.127 16:56:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:31.127 16:56:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:31.127 16:56:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:31.127 16:56:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:31.127 16:56:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:31.127 00:04:31.127 real 0m4.113s 00:04:31.127 user 0m3.714s 00:04:31.127 sys 0m0.691s 00:04:31.127 ************************************ 00:04:31.127 END TEST json_config_extra_key 00:04:31.127 ************************************ 00:04:31.127 16:56:54 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.127 16:56:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:31.127 16:56:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:31.127 16:56:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.127 16:56:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.127 16:56:54 -- common/autotest_common.sh@10 -- # set +x 00:04:31.127 ************************************ 00:04:31.127 START TEST alias_rpc 00:04:31.127 
************************************ 00:04:31.127 16:56:54 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:31.387 * Looking for test storage... 00:04:31.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:31.387 16:56:55 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.387 16:56:55 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:31.387 16:56:55 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.387 16:56:55 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.387 16:56:55 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:31.387 16:56:55 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.387 16:56:55 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.387 --rc genhtml_branch_coverage=1 00:04:31.387 --rc genhtml_function_coverage=1 00:04:31.387 --rc genhtml_legend=1 00:04:31.387 --rc geninfo_all_blocks=1 00:04:31.387 --rc geninfo_unexecuted_blocks=1 00:04:31.387 00:04:31.387 ' 00:04:31.387 16:56:55 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.387 --rc genhtml_branch_coverage=1 00:04:31.387 --rc genhtml_function_coverage=1 00:04:31.387 --rc genhtml_legend=1 00:04:31.387 --rc geninfo_all_blocks=1 00:04:31.387 --rc geninfo_unexecuted_blocks=1 00:04:31.387 00:04:31.387 ' 00:04:31.387 16:56:55 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:04:31.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.387 --rc genhtml_branch_coverage=1 00:04:31.387 --rc genhtml_function_coverage=1 00:04:31.387 --rc genhtml_legend=1 00:04:31.387 --rc geninfo_all_blocks=1 00:04:31.387 --rc geninfo_unexecuted_blocks=1 00:04:31.387 00:04:31.387 ' 00:04:31.387 16:56:55 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.387 --rc genhtml_branch_coverage=1 00:04:31.387 --rc genhtml_function_coverage=1 00:04:31.387 --rc genhtml_legend=1 00:04:31.387 --rc geninfo_all_blocks=1 00:04:31.387 --rc geninfo_unexecuted_blocks=1 00:04:31.387 00:04:31.387 ' 00:04:31.387 16:56:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:31.387 16:56:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57601 00:04:31.387 16:56:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.387 16:56:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57601 00:04:31.387 16:56:55 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57601 ']' 00:04:31.387 16:56:55 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.388 16:56:55 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.388 16:56:55 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.388 16:56:55 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.388 16:56:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.646 [2024-11-20 16:56:55.266216] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:04:31.646 [2024-11-20 16:56:55.266910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57601 ] 00:04:31.646 [2024-11-20 16:56:55.457284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.906 [2024-11-20 16:56:55.581115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.844 16:56:56 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.844 16:56:56 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:32.844 16:56:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:32.844 16:56:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57601 00:04:32.844 16:56:56 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57601 ']' 00:04:32.844 16:56:56 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57601 00:04:32.844 16:56:56 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:32.844 16:56:56 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.844 16:56:56 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57601 00:04:33.102 killing process with pid 57601 00:04:33.102 16:56:56 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.102 16:56:56 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.102 16:56:56 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57601' 00:04:33.102 16:56:56 alias_rpc -- common/autotest_common.sh@973 -- # kill 57601 00:04:33.102 16:56:56 alias_rpc -- common/autotest_common.sh@978 -- # wait 57601 00:04:35.006 ************************************ 00:04:35.006 END TEST alias_rpc 00:04:35.006 ************************************ 00:04:35.006 00:04:35.006 real 
0m3.805s 00:04:35.006 user 0m3.909s 00:04:35.006 sys 0m0.644s 00:04:35.006 16:56:58 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.006 16:56:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.006 16:56:58 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:35.006 16:56:58 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:35.006 16:56:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.006 16:56:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.006 16:56:58 -- common/autotest_common.sh@10 -- # set +x 00:04:35.006 ************************************ 00:04:35.006 START TEST spdkcli_tcp 00:04:35.006 ************************************ 00:04:35.006 16:56:58 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:35.265 * Looking for test storage... 00:04:35.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:35.265 16:56:58 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:35.265 16:56:58 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:35.265 16:56:58 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:35.265 16:56:58 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.265 
16:56:58 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.265 16:56:58 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:35.265 16:56:58 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.265 16:56:58 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:35.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.265 --rc genhtml_branch_coverage=1 00:04:35.265 --rc genhtml_function_coverage=1 00:04:35.265 --rc genhtml_legend=1 
00:04:35.265 --rc geninfo_all_blocks=1 00:04:35.265 --rc geninfo_unexecuted_blocks=1 00:04:35.265 00:04:35.265 ' 00:04:35.265 16:56:58 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:35.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.265 --rc genhtml_branch_coverage=1 00:04:35.265 --rc genhtml_function_coverage=1 00:04:35.265 --rc genhtml_legend=1 00:04:35.265 --rc geninfo_all_blocks=1 00:04:35.265 --rc geninfo_unexecuted_blocks=1 00:04:35.265 00:04:35.265 ' 00:04:35.265 16:56:58 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:35.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.265 --rc genhtml_branch_coverage=1 00:04:35.265 --rc genhtml_function_coverage=1 00:04:35.265 --rc genhtml_legend=1 00:04:35.265 --rc geninfo_all_blocks=1 00:04:35.265 --rc geninfo_unexecuted_blocks=1 00:04:35.265 00:04:35.265 ' 00:04:35.265 16:56:58 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:35.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.265 --rc genhtml_branch_coverage=1 00:04:35.265 --rc genhtml_function_coverage=1 00:04:35.265 --rc genhtml_legend=1 00:04:35.265 --rc geninfo_all_blocks=1 00:04:35.265 --rc geninfo_unexecuted_blocks=1 00:04:35.265 00:04:35.265 ' 00:04:35.265 16:56:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:35.265 16:56:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:35.265 16:56:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:35.265 16:56:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:35.265 16:56:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:35.265 16:56:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:35.265 16:56:58 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:35.265 16:56:58 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.265 16:56:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:35.265 16:56:59 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57708 00:04:35.265 16:56:59 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:35.265 16:56:59 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57708 00:04:35.265 16:56:59 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57708 ']' 00:04:35.265 16:56:59 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.265 16:56:59 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.265 16:56:59 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.265 16:56:59 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.265 16:56:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:35.532 [2024-11-20 16:56:59.150827] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:04:35.532 [2024-11-20 16:56:59.151221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57708 ] 00:04:35.532 [2024-11-20 16:56:59.337899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.800 [2024-11-20 16:56:59.468161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.800 [2024-11-20 16:56:59.468171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.737 16:57:00 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.737 16:57:00 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:36.737 16:57:00 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:36.737 16:57:00 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57725 00:04:36.737 16:57:00 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:36.737 [ 00:04:36.737 "bdev_malloc_delete", 00:04:36.737 "bdev_malloc_create", 00:04:36.737 "bdev_null_resize", 00:04:36.737 "bdev_null_delete", 00:04:36.737 "bdev_null_create", 00:04:36.737 "bdev_nvme_cuse_unregister", 00:04:36.737 "bdev_nvme_cuse_register", 00:04:36.737 "bdev_opal_new_user", 00:04:36.737 "bdev_opal_set_lock_state", 00:04:36.737 "bdev_opal_delete", 00:04:36.737 "bdev_opal_get_info", 00:04:36.737 "bdev_opal_create", 00:04:36.737 "bdev_nvme_opal_revert", 00:04:36.737 "bdev_nvme_opal_init", 00:04:36.737 "bdev_nvme_send_cmd", 00:04:36.737 "bdev_nvme_set_keys", 00:04:36.737 "bdev_nvme_get_path_iostat", 00:04:36.737 "bdev_nvme_get_mdns_discovery_info", 00:04:36.737 "bdev_nvme_stop_mdns_discovery", 00:04:36.737 "bdev_nvme_start_mdns_discovery", 00:04:36.737 "bdev_nvme_set_multipath_policy", 00:04:36.737 
"bdev_nvme_set_preferred_path", 00:04:36.737 "bdev_nvme_get_io_paths", 00:04:36.737 "bdev_nvme_remove_error_injection", 00:04:36.737 "bdev_nvme_add_error_injection", 00:04:36.737 "bdev_nvme_get_discovery_info", 00:04:36.737 "bdev_nvme_stop_discovery", 00:04:36.737 "bdev_nvme_start_discovery", 00:04:36.738 "bdev_nvme_get_controller_health_info", 00:04:36.738 "bdev_nvme_disable_controller", 00:04:36.738 "bdev_nvme_enable_controller", 00:04:36.738 "bdev_nvme_reset_controller", 00:04:36.738 "bdev_nvme_get_transport_statistics", 00:04:36.738 "bdev_nvme_apply_firmware", 00:04:36.738 "bdev_nvme_detach_controller", 00:04:36.738 "bdev_nvme_get_controllers", 00:04:36.738 "bdev_nvme_attach_controller", 00:04:36.738 "bdev_nvme_set_hotplug", 00:04:36.738 "bdev_nvme_set_options", 00:04:36.738 "bdev_passthru_delete", 00:04:36.738 "bdev_passthru_create", 00:04:36.738 "bdev_lvol_set_parent_bdev", 00:04:36.738 "bdev_lvol_set_parent", 00:04:36.738 "bdev_lvol_check_shallow_copy", 00:04:36.738 "bdev_lvol_start_shallow_copy", 00:04:36.738 "bdev_lvol_grow_lvstore", 00:04:36.738 "bdev_lvol_get_lvols", 00:04:36.738 "bdev_lvol_get_lvstores", 00:04:36.738 "bdev_lvol_delete", 00:04:36.738 "bdev_lvol_set_read_only", 00:04:36.738 "bdev_lvol_resize", 00:04:36.738 "bdev_lvol_decouple_parent", 00:04:36.738 "bdev_lvol_inflate", 00:04:36.738 "bdev_lvol_rename", 00:04:36.738 "bdev_lvol_clone_bdev", 00:04:36.738 "bdev_lvol_clone", 00:04:36.738 "bdev_lvol_snapshot", 00:04:36.738 "bdev_lvol_create", 00:04:36.738 "bdev_lvol_delete_lvstore", 00:04:36.738 "bdev_lvol_rename_lvstore", 00:04:36.738 "bdev_lvol_create_lvstore", 00:04:36.738 "bdev_raid_set_options", 00:04:36.738 "bdev_raid_remove_base_bdev", 00:04:36.738 "bdev_raid_add_base_bdev", 00:04:36.738 "bdev_raid_delete", 00:04:36.738 "bdev_raid_create", 00:04:36.738 "bdev_raid_get_bdevs", 00:04:36.738 "bdev_error_inject_error", 00:04:36.738 "bdev_error_delete", 00:04:36.738 "bdev_error_create", 00:04:36.738 "bdev_split_delete", 00:04:36.738 
"bdev_split_create", 00:04:36.738 "bdev_delay_delete", 00:04:36.738 "bdev_delay_create", 00:04:36.738 "bdev_delay_update_latency", 00:04:36.738 "bdev_zone_block_delete", 00:04:36.738 "bdev_zone_block_create", 00:04:36.738 "blobfs_create", 00:04:36.738 "blobfs_detect", 00:04:36.738 "blobfs_set_cache_size", 00:04:36.738 "bdev_aio_delete", 00:04:36.738 "bdev_aio_rescan", 00:04:36.738 "bdev_aio_create", 00:04:36.738 "bdev_ftl_set_property", 00:04:36.738 "bdev_ftl_get_properties", 00:04:36.738 "bdev_ftl_get_stats", 00:04:36.738 "bdev_ftl_unmap", 00:04:36.738 "bdev_ftl_unload", 00:04:36.738 "bdev_ftl_delete", 00:04:36.738 "bdev_ftl_load", 00:04:36.738 "bdev_ftl_create", 00:04:36.738 "bdev_virtio_attach_controller", 00:04:36.738 "bdev_virtio_scsi_get_devices", 00:04:36.738 "bdev_virtio_detach_controller", 00:04:36.738 "bdev_virtio_blk_set_hotplug", 00:04:36.738 "bdev_iscsi_delete", 00:04:36.738 "bdev_iscsi_create", 00:04:36.738 "bdev_iscsi_set_options", 00:04:36.738 "accel_error_inject_error", 00:04:36.738 "ioat_scan_accel_module", 00:04:36.738 "dsa_scan_accel_module", 00:04:36.738 "iaa_scan_accel_module", 00:04:36.738 "keyring_file_remove_key", 00:04:36.738 "keyring_file_add_key", 00:04:36.738 "keyring_linux_set_options", 00:04:36.738 "fsdev_aio_delete", 00:04:36.738 "fsdev_aio_create", 00:04:36.738 "iscsi_get_histogram", 00:04:36.738 "iscsi_enable_histogram", 00:04:36.738 "iscsi_set_options", 00:04:36.738 "iscsi_get_auth_groups", 00:04:36.738 "iscsi_auth_group_remove_secret", 00:04:36.738 "iscsi_auth_group_add_secret", 00:04:36.738 "iscsi_delete_auth_group", 00:04:36.738 "iscsi_create_auth_group", 00:04:36.738 "iscsi_set_discovery_auth", 00:04:36.738 "iscsi_get_options", 00:04:36.738 "iscsi_target_node_request_logout", 00:04:36.738 "iscsi_target_node_set_redirect", 00:04:36.738 "iscsi_target_node_set_auth", 00:04:36.738 "iscsi_target_node_add_lun", 00:04:36.738 "iscsi_get_stats", 00:04:36.738 "iscsi_get_connections", 00:04:36.738 "iscsi_portal_group_set_auth", 
00:04:36.738 "iscsi_start_portal_group", 00:04:36.738 "iscsi_delete_portal_group", 00:04:36.738 "iscsi_create_portal_group", 00:04:36.738 "iscsi_get_portal_groups", 00:04:36.738 "iscsi_delete_target_node", 00:04:36.738 "iscsi_target_node_remove_pg_ig_maps", 00:04:36.738 "iscsi_target_node_add_pg_ig_maps", 00:04:36.738 "iscsi_create_target_node", 00:04:36.738 "iscsi_get_target_nodes", 00:04:36.738 "iscsi_delete_initiator_group", 00:04:36.738 "iscsi_initiator_group_remove_initiators", 00:04:36.738 "iscsi_initiator_group_add_initiators", 00:04:36.738 "iscsi_create_initiator_group", 00:04:36.738 "iscsi_get_initiator_groups", 00:04:36.738 "nvmf_set_crdt", 00:04:36.738 "nvmf_set_config", 00:04:36.738 "nvmf_set_max_subsystems", 00:04:36.738 "nvmf_stop_mdns_prr", 00:04:36.738 "nvmf_publish_mdns_prr", 00:04:36.738 "nvmf_subsystem_get_listeners", 00:04:36.738 "nvmf_subsystem_get_qpairs", 00:04:36.738 "nvmf_subsystem_get_controllers", 00:04:36.738 "nvmf_get_stats", 00:04:36.738 "nvmf_get_transports", 00:04:36.738 "nvmf_create_transport", 00:04:36.738 "nvmf_get_targets", 00:04:36.738 "nvmf_delete_target", 00:04:36.738 "nvmf_create_target", 00:04:36.738 "nvmf_subsystem_allow_any_host", 00:04:36.738 "nvmf_subsystem_set_keys", 00:04:36.738 "nvmf_subsystem_remove_host", 00:04:36.738 "nvmf_subsystem_add_host", 00:04:36.738 "nvmf_ns_remove_host", 00:04:36.738 "nvmf_ns_add_host", 00:04:36.738 "nvmf_subsystem_remove_ns", 00:04:36.738 "nvmf_subsystem_set_ns_ana_group", 00:04:36.738 "nvmf_subsystem_add_ns", 00:04:36.738 "nvmf_subsystem_listener_set_ana_state", 00:04:36.738 "nvmf_discovery_get_referrals", 00:04:36.738 "nvmf_discovery_remove_referral", 00:04:36.738 "nvmf_discovery_add_referral", 00:04:36.738 "nvmf_subsystem_remove_listener", 00:04:36.738 "nvmf_subsystem_add_listener", 00:04:36.738 "nvmf_delete_subsystem", 00:04:36.738 "nvmf_create_subsystem", 00:04:36.738 "nvmf_get_subsystems", 00:04:36.738 "env_dpdk_get_mem_stats", 00:04:36.738 "nbd_get_disks", 00:04:36.738 
"nbd_stop_disk", 00:04:36.738 "nbd_start_disk", 00:04:36.738 "ublk_recover_disk", 00:04:36.738 "ublk_get_disks", 00:04:36.738 "ublk_stop_disk", 00:04:36.738 "ublk_start_disk", 00:04:36.738 "ublk_destroy_target", 00:04:36.738 "ublk_create_target", 00:04:36.738 "virtio_blk_create_transport", 00:04:36.738 "virtio_blk_get_transports", 00:04:36.738 "vhost_controller_set_coalescing", 00:04:36.738 "vhost_get_controllers", 00:04:36.738 "vhost_delete_controller", 00:04:36.738 "vhost_create_blk_controller", 00:04:36.738 "vhost_scsi_controller_remove_target", 00:04:36.738 "vhost_scsi_controller_add_target", 00:04:36.738 "vhost_start_scsi_controller", 00:04:36.738 "vhost_create_scsi_controller", 00:04:36.738 "thread_set_cpumask", 00:04:36.738 "scheduler_set_options", 00:04:36.738 "framework_get_governor", 00:04:36.738 "framework_get_scheduler", 00:04:36.738 "framework_set_scheduler", 00:04:36.738 "framework_get_reactors", 00:04:36.738 "thread_get_io_channels", 00:04:36.738 "thread_get_pollers", 00:04:36.738 "thread_get_stats", 00:04:36.738 "framework_monitor_context_switch", 00:04:36.738 "spdk_kill_instance", 00:04:36.738 "log_enable_timestamps", 00:04:36.738 "log_get_flags", 00:04:36.738 "log_clear_flag", 00:04:36.738 "log_set_flag", 00:04:36.738 "log_get_level", 00:04:36.738 "log_set_level", 00:04:36.738 "log_get_print_level", 00:04:36.738 "log_set_print_level", 00:04:36.738 "framework_enable_cpumask_locks", 00:04:36.738 "framework_disable_cpumask_locks", 00:04:36.738 "framework_wait_init", 00:04:36.738 "framework_start_init", 00:04:36.738 "scsi_get_devices", 00:04:36.738 "bdev_get_histogram", 00:04:36.738 "bdev_enable_histogram", 00:04:36.738 "bdev_set_qos_limit", 00:04:36.738 "bdev_set_qd_sampling_period", 00:04:36.738 "bdev_get_bdevs", 00:04:36.738 "bdev_reset_iostat", 00:04:36.738 "bdev_get_iostat", 00:04:36.738 "bdev_examine", 00:04:36.738 "bdev_wait_for_examine", 00:04:36.738 "bdev_set_options", 00:04:36.738 "accel_get_stats", 00:04:36.738 "accel_set_options", 
00:04:36.738 "accel_set_driver", 00:04:36.738 "accel_crypto_key_destroy", 00:04:36.738 "accel_crypto_keys_get", 00:04:36.738 "accel_crypto_key_create", 00:04:36.738 "accel_assign_opc", 00:04:36.738 "accel_get_module_info", 00:04:36.738 "accel_get_opc_assignments", 00:04:36.738 "vmd_rescan", 00:04:36.738 "vmd_remove_device", 00:04:36.738 "vmd_enable", 00:04:36.738 "sock_get_default_impl", 00:04:36.738 "sock_set_default_impl", 00:04:36.738 "sock_impl_set_options", 00:04:36.738 "sock_impl_get_options", 00:04:36.738 "iobuf_get_stats", 00:04:36.738 "iobuf_set_options", 00:04:36.738 "keyring_get_keys", 00:04:36.738 "framework_get_pci_devices", 00:04:36.738 "framework_get_config", 00:04:36.738 "framework_get_subsystems", 00:04:36.738 "fsdev_set_opts", 00:04:36.738 "fsdev_get_opts", 00:04:36.738 "trace_get_info", 00:04:36.738 "trace_get_tpoint_group_mask", 00:04:36.738 "trace_disable_tpoint_group", 00:04:36.738 "trace_enable_tpoint_group", 00:04:36.738 "trace_clear_tpoint_mask", 00:04:36.738 "trace_set_tpoint_mask", 00:04:36.738 "notify_get_notifications", 00:04:36.738 "notify_get_types", 00:04:36.738 "spdk_get_version", 00:04:36.738 "rpc_get_methods" 00:04:36.738 ] 00:04:36.738 16:57:00 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:36.738 16:57:00 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.738 16:57:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.738 16:57:00 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:36.738 16:57:00 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57708 00:04:36.738 16:57:00 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57708 ']' 00:04:36.738 16:57:00 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57708 00:04:36.738 16:57:00 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:36.739 16:57:00 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.739 16:57:00 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57708 00:04:36.998 killing process with pid 57708 00:04:36.998 16:57:00 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.998 16:57:00 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.998 16:57:00 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57708' 00:04:36.998 16:57:00 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57708 00:04:36.998 16:57:00 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57708 00:04:38.952 ************************************ 00:04:38.952 END TEST spdkcli_tcp 00:04:38.952 ************************************ 00:04:38.952 00:04:38.952 real 0m3.957s 00:04:38.952 user 0m7.104s 00:04:38.952 sys 0m0.667s 00:04:38.952 16:57:02 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.952 16:57:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.952 16:57:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:38.952 16:57:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.952 16:57:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.952 16:57:02 -- common/autotest_common.sh@10 -- # set +x 00:04:39.212 ************************************ 00:04:39.212 START TEST dpdk_mem_utility 00:04:39.212 ************************************ 00:04:39.212 16:57:02 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:39.212 * Looking for test storage... 
00:04:39.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:39.212 16:57:02 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.212 16:57:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.212 16:57:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.212 16:57:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.212 16:57:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:39.212 16:57:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:39.212 16:57:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.212 16:57:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:39.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.212 16:57:03 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.212 16:57:03 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.212 16:57:03 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.212 16:57:03 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:39.212 16:57:03 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.212 16:57:03 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.212 --rc genhtml_branch_coverage=1 00:04:39.212 --rc genhtml_function_coverage=1 00:04:39.212 --rc genhtml_legend=1 00:04:39.212 --rc geninfo_all_blocks=1 00:04:39.212 --rc geninfo_unexecuted_blocks=1 00:04:39.212 00:04:39.212 ' 00:04:39.212 16:57:03 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.212 --rc genhtml_branch_coverage=1 00:04:39.212 --rc genhtml_function_coverage=1 
00:04:39.212 --rc genhtml_legend=1 00:04:39.212 --rc geninfo_all_blocks=1 00:04:39.212 --rc geninfo_unexecuted_blocks=1 00:04:39.212 00:04:39.212 ' 00:04:39.212 16:57:03 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.212 --rc genhtml_branch_coverage=1 00:04:39.212 --rc genhtml_function_coverage=1 00:04:39.212 --rc genhtml_legend=1 00:04:39.212 --rc geninfo_all_blocks=1 00:04:39.212 --rc geninfo_unexecuted_blocks=1 00:04:39.212 00:04:39.212 ' 00:04:39.212 16:57:03 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.212 --rc genhtml_branch_coverage=1 00:04:39.212 --rc genhtml_function_coverage=1 00:04:39.212 --rc genhtml_legend=1 00:04:39.212 --rc geninfo_all_blocks=1 00:04:39.212 --rc geninfo_unexecuted_blocks=1 00:04:39.212 00:04:39.212 ' 00:04:39.212 16:57:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:39.212 16:57:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57830 00:04:39.212 16:57:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57830 00:04:39.212 16:57:03 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57830 ']' 00:04:39.212 16:57:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.212 16:57:03 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.212 16:57:03 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.212 16:57:03 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:39.212 16:57:03 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.212 16:57:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:39.472 [2024-11-20 16:57:03.117466] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:04:39.472 [2024-11-20 16:57:03.117863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57830 ] 00:04:39.472 [2024-11-20 16:57:03.293086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.732 [2024-11-20 16:57:03.412720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.673 16:57:04 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.673 16:57:04 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:40.673 16:57:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:40.673 16:57:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:40.673 16:57:04 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.673 16:57:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.673 { 00:04:40.673 "filename": "/tmp/spdk_mem_dump.txt" 00:04:40.673 } 00:04:40.673 16:57:04 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.673 16:57:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:40.673 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:40.673 1 heaps totaling size 824.000000 MiB 00:04:40.673 size: 824.000000 MiB heap id: 0 00:04:40.673 end heaps---------- 00:04:40.673 9 mempools totaling size 603.782043 MiB 00:04:40.673 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:40.673 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:40.673 size: 100.555481 MiB name: bdev_io_57830 00:04:40.673 size: 50.003479 MiB name: msgpool_57830 00:04:40.673 size: 36.509338 MiB name: fsdev_io_57830 00:04:40.673 size: 21.763794 MiB name: PDU_Pool 00:04:40.673 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:40.673 size: 4.133484 MiB name: evtpool_57830 00:04:40.673 size: 0.026123 MiB name: Session_Pool 00:04:40.673 end mempools------- 00:04:40.673 6 memzones totaling size 4.142822 MiB 00:04:40.673 size: 1.000366 MiB name: RG_ring_0_57830 00:04:40.673 size: 1.000366 MiB name: RG_ring_1_57830 00:04:40.673 size: 1.000366 MiB name: RG_ring_4_57830 00:04:40.673 size: 1.000366 MiB name: RG_ring_5_57830 00:04:40.673 size: 0.125366 MiB name: RG_ring_2_57830 00:04:40.673 size: 0.015991 MiB name: RG_ring_3_57830 00:04:40.673 end memzones------- 00:04:40.673 16:57:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:40.673 heap id: 0 total size: 824.000000 MiB number of busy elements: 315 number of free elements: 18 00:04:40.673 list of free elements. 
size: 16.781372 MiB 00:04:40.673 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:40.673 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:40.673 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:40.673 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:40.673 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:40.673 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:40.673 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:40.673 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:40.673 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:40.673 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:40.673 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:40.673 element at address: 0x20001b400000 with size: 0.562927 MiB 00:04:40.673 element at address: 0x200000c00000 with size: 0.489197 MiB 00:04:40.673 element at address: 0x200019600000 with size: 0.487976 MiB 00:04:40.673 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:40.673 element at address: 0x200012c00000 with size: 0.433228 MiB 00:04:40.673 element at address: 0x200028800000 with size: 0.390442 MiB 00:04:40.673 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:40.673 list of standard malloc elements. 
size: 199.287720 MiB 00:04:40.673 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:40.673 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:40.673 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:40.673 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:40.673 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:40.673 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:40.674 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:40.674 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:40.674 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:40.674 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:40.674 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:40.674 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:40.674 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:04:40.674 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:40.674 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:04:40.674 
element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200019affc40 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:04:40.674 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4911c0 with size: 0.000244 
MiB 00:04:40.675 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b492dc0 
with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:04:40.675 element at 
address: 0x20001b4949c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:04:40.675 element at address: 0x200028863f40 with size: 0.000244 MiB 00:04:40.675 element at address: 0x200028864040 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886af80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886b080 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886b180 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886b280 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886b380 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886b480 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886b580 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886b680 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886b780 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886b880 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886b980 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886bc80 with size: 0.000244 MiB 
00:04:40.675 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886be80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886c080 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886c180 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886c280 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886c380 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886c480 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886c580 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886c680 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886c780 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886c880 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886c980 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886d080 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886d180 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886d280 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886d380 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886d480 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886d580 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886d680 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886d780 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886d880 with 
size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886d980 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886da80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886db80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886de80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886df80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886e080 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886e180 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886e280 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886e380 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886e480 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886e580 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886e680 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886e780 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886e880 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886e980 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886f080 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886f180 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886f280 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886f380 with size: 0.000244 MiB 00:04:40.675 element at address: 
0x20002886f480 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886f580 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886f680 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886f780 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886f880 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886f980 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:04:40.675 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:04:40.676 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:04:40.676 list of memzone associated elements. size: 607.930908 MiB 00:04:40.676 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:40.676 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:40.676 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:40.676 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:40.676 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:40.676 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57830_0 00:04:40.676 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:40.676 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57830_0 00:04:40.676 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:40.676 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57830_0 00:04:40.676 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:40.676 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:40.676 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:40.676 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:40.676 element at address: 0x2000004ffec0 with size: 
3.000305 MiB 00:04:40.676 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57830_0 00:04:40.676 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:40.676 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57830 00:04:40.676 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:40.676 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57830 00:04:40.676 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:40.676 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:40.676 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:40.676 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:40.676 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:40.676 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:40.676 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:40.676 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:40.676 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:40.676 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57830 00:04:40.676 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:40.676 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57830 00:04:40.676 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:40.676 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57830 00:04:40.676 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:40.676 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57830 00:04:40.676 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:40.676 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57830 00:04:40.676 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:40.676 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57830 00:04:40.676 element at address: 0x20001967dac0 with size: 
0.500549 MiB 00:04:40.676 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:40.676 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:40.676 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:40.676 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:04:40.676 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:40.676 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:40.676 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57830 00:04:40.676 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:40.676 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57830 00:04:40.676 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:04:40.676 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:40.676 element at address: 0x200028864140 with size: 0.023804 MiB 00:04:40.676 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:40.676 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:40.676 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57830 00:04:40.676 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:04:40.676 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:40.676 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:40.676 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57830 00:04:40.676 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:40.676 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57830 00:04:40.676 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:40.676 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57830 00:04:40.676 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:04:40.676 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:40.676 16:57:04 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:40.676 16:57:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57830 00:04:40.676 16:57:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57830 ']' 00:04:40.676 16:57:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57830 00:04:40.676 16:57:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:40.676 16:57:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.676 16:57:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57830 00:04:40.676 killing process with pid 57830 00:04:40.676 16:57:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.676 16:57:04 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.676 16:57:04 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57830' 00:04:40.676 16:57:04 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57830 00:04:40.676 16:57:04 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57830 00:04:43.213 00:04:43.213 real 0m3.833s 00:04:43.213 user 0m3.788s 00:04:43.213 sys 0m0.683s 00:04:43.213 16:57:06 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.213 16:57:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.213 ************************************ 00:04:43.213 END TEST dpdk_mem_utility 00:04:43.213 ************************************ 00:04:43.213 16:57:06 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:43.213 16:57:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.213 16:57:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.213 16:57:06 -- common/autotest_common.sh@10 -- # set +x 00:04:43.213 ************************************ 
00:04:43.213 START TEST event 00:04:43.213 ************************************ 00:04:43.213 16:57:06 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:43.213 * Looking for test storage... 00:04:43.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:43.213 16:57:06 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:43.213 16:57:06 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:43.213 16:57:06 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:43.213 16:57:06 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:43.213 16:57:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.213 16:57:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.213 16:57:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.213 16:57:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.213 16:57:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.213 16:57:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.213 16:57:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.213 16:57:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.213 16:57:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.213 16:57:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.213 16:57:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.213 16:57:06 event -- scripts/common.sh@344 -- # case "$op" in 00:04:43.213 16:57:06 event -- scripts/common.sh@345 -- # : 1 00:04:43.213 16:57:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.213 16:57:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.213 16:57:06 event -- scripts/common.sh@365 -- # decimal 1 00:04:43.213 16:57:06 event -- scripts/common.sh@353 -- # local d=1 00:04:43.213 16:57:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.213 16:57:06 event -- scripts/common.sh@355 -- # echo 1 00:04:43.213 16:57:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.213 16:57:06 event -- scripts/common.sh@366 -- # decimal 2 00:04:43.213 16:57:06 event -- scripts/common.sh@353 -- # local d=2 00:04:43.213 16:57:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.213 16:57:06 event -- scripts/common.sh@355 -- # echo 2 00:04:43.213 16:57:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.213 16:57:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.213 16:57:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.213 16:57:06 event -- scripts/common.sh@368 -- # return 0 00:04:43.213 16:57:06 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.213 16:57:06 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:43.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.213 --rc genhtml_branch_coverage=1 00:04:43.213 --rc genhtml_function_coverage=1 00:04:43.213 --rc genhtml_legend=1 00:04:43.213 --rc geninfo_all_blocks=1 00:04:43.213 --rc geninfo_unexecuted_blocks=1 00:04:43.213 00:04:43.213 ' 00:04:43.213 16:57:06 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:43.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.213 --rc genhtml_branch_coverage=1 00:04:43.213 --rc genhtml_function_coverage=1 00:04:43.213 --rc genhtml_legend=1 00:04:43.213 --rc geninfo_all_blocks=1 00:04:43.213 --rc geninfo_unexecuted_blocks=1 00:04:43.213 00:04:43.213 ' 00:04:43.213 16:57:06 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:43.213 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:43.213 --rc genhtml_branch_coverage=1 00:04:43.213 --rc genhtml_function_coverage=1 00:04:43.213 --rc genhtml_legend=1 00:04:43.213 --rc geninfo_all_blocks=1 00:04:43.213 --rc geninfo_unexecuted_blocks=1 00:04:43.213 00:04:43.213 ' 00:04:43.213 16:57:06 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:43.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.213 --rc genhtml_branch_coverage=1 00:04:43.213 --rc genhtml_function_coverage=1 00:04:43.213 --rc genhtml_legend=1 00:04:43.213 --rc geninfo_all_blocks=1 00:04:43.213 --rc geninfo_unexecuted_blocks=1 00:04:43.213 00:04:43.213 ' 00:04:43.213 16:57:06 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:43.213 16:57:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:43.213 16:57:06 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.213 16:57:06 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:43.213 16:57:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.213 16:57:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.213 ************************************ 00:04:43.213 START TEST event_perf 00:04:43.213 ************************************ 00:04:43.213 16:57:06 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.213 Running I/O for 1 seconds...[2024-11-20 16:57:06.951174] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:04:43.213 [2024-11-20 16:57:06.951536] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57938 ] 00:04:43.472 [2024-11-20 16:57:07.146990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:43.472 [2024-11-20 16:57:07.261457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.472 [2024-11-20 16:57:07.261621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.472 [2024-11-20 16:57:07.261734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.472 [2024-11-20 16:57:07.261897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.849 Running I/O for 1 seconds... 00:04:44.849 lcore 0: 202527 00:04:44.849 lcore 1: 202526 00:04:44.849 lcore 2: 202526 00:04:44.849 lcore 3: 202528 00:04:44.849 done. 
00:04:44.849 ************************************ 00:04:44.849 END TEST event_perf 00:04:44.849 ************************************ 00:04:44.849 00:04:44.849 real 0m1.602s 00:04:44.849 user 0m4.355s 00:04:44.849 sys 0m0.125s 00:04:44.849 16:57:08 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.849 16:57:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.849 16:57:08 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:44.849 16:57:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:44.849 16:57:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.849 16:57:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.849 ************************************ 00:04:44.849 START TEST event_reactor 00:04:44.849 ************************************ 00:04:44.849 16:57:08 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:44.849 [2024-11-20 16:57:08.601799] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:04:44.849 [2024-11-20 16:57:08.601983] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57972 ] 00:04:45.108 [2024-11-20 16:57:08.783793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.108 [2024-11-20 16:57:08.910680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.486 test_start 00:04:46.486 oneshot 00:04:46.486 tick 100 00:04:46.486 tick 100 00:04:46.486 tick 250 00:04:46.486 tick 100 00:04:46.486 tick 100 00:04:46.486 tick 100 00:04:46.486 tick 250 00:04:46.486 tick 500 00:04:46.486 tick 100 00:04:46.486 tick 100 00:04:46.486 tick 250 00:04:46.486 tick 100 00:04:46.486 tick 100 00:04:46.486 test_end 00:04:46.486 00:04:46.486 real 0m1.577s 00:04:46.486 user 0m1.361s 00:04:46.486 sys 0m0.107s 00:04:46.486 16:57:10 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.486 ************************************ 00:04:46.486 END TEST event_reactor 00:04:46.486 ************************************ 00:04:46.486 16:57:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:46.486 16:57:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.486 16:57:10 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:46.486 16:57:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.486 16:57:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.486 ************************************ 00:04:46.486 START TEST event_reactor_perf 00:04:46.486 ************************************ 00:04:46.486 16:57:10 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.486 [2024-11-20 
16:57:10.237180] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:04:46.486 [2024-11-20 16:57:10.237612] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58014 ] 00:04:46.745 [2024-11-20 16:57:10.422844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.745 [2024-11-20 16:57:10.541895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.124 test_start 00:04:48.124 test_end 00:04:48.124 Performance: 311423 events per second 00:04:48.124 ************************************ 00:04:48.124 END TEST event_reactor_perf 00:04:48.124 00:04:48.124 real 0m1.573s 00:04:48.124 user 0m1.365s 00:04:48.124 sys 0m0.099s 00:04:48.124 16:57:11 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.124 16:57:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:48.124 ************************************ 00:04:48.124 16:57:11 event -- event/event.sh@49 -- # uname -s 00:04:48.124 16:57:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:48.124 16:57:11 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:48.124 16:57:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.124 16:57:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.124 16:57:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.124 ************************************ 00:04:48.124 START TEST event_scheduler 00:04:48.124 ************************************ 00:04:48.124 16:57:11 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:48.124 * Looking for test storage... 
00:04:48.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:48.124 16:57:11 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:48.124 16:57:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:48.124 16:57:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:48.124 16:57:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:48.124 16:57:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.124 16:57:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.124 16:57:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.124 16:57:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.124 16:57:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.124 16:57:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.124 16:57:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.124 16:57:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.124 16:57:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.124 16:57:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.124 16:57:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.124 16:57:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:48.124 16:57:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:48.384 16:57:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.384 16:57:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.384 16:57:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:48.384 16:57:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:48.384 16:57:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.384 16:57:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:48.384 16:57:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.384 16:57:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:48.384 16:57:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:48.384 16:57:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.384 16:57:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:48.384 16:57:12 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.384 16:57:12 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.384 16:57:12 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.384 16:57:12 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:48.384 16:57:12 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.384 16:57:12 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:48.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.384 --rc genhtml_branch_coverage=1 00:04:48.384 --rc genhtml_function_coverage=1 00:04:48.384 --rc genhtml_legend=1 00:04:48.384 --rc geninfo_all_blocks=1 00:04:48.384 --rc geninfo_unexecuted_blocks=1 00:04:48.384 00:04:48.384 ' 00:04:48.384 16:57:12 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:48.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.384 --rc genhtml_branch_coverage=1 00:04:48.384 --rc genhtml_function_coverage=1 00:04:48.384 --rc 
genhtml_legend=1 00:04:48.384 --rc geninfo_all_blocks=1 00:04:48.384 --rc geninfo_unexecuted_blocks=1 00:04:48.384 00:04:48.384 ' 00:04:48.384 16:57:12 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:48.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.384 --rc genhtml_branch_coverage=1 00:04:48.384 --rc genhtml_function_coverage=1 00:04:48.384 --rc genhtml_legend=1 00:04:48.384 --rc geninfo_all_blocks=1 00:04:48.384 --rc geninfo_unexecuted_blocks=1 00:04:48.384 00:04:48.384 ' 00:04:48.384 16:57:12 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:48.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.384 --rc genhtml_branch_coverage=1 00:04:48.384 --rc genhtml_function_coverage=1 00:04:48.384 --rc genhtml_legend=1 00:04:48.384 --rc geninfo_all_blocks=1 00:04:48.384 --rc geninfo_unexecuted_blocks=1 00:04:48.384 00:04:48.384 ' 00:04:48.384 16:57:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:48.384 16:57:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58090 00:04:48.384 16:57:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:48.384 16:57:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.384 16:57:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58090 00:04:48.384 16:57:12 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58090 ']' 00:04:48.384 16:57:12 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.384 16:57:12 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.384 16:57:12 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:04:48.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.384 16:57:12 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.385 16:57:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.385 [2024-11-20 16:57:12.119414] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:04:48.385 [2024-11-20 16:57:12.119853] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58090 ] 00:04:48.644 [2024-11-20 16:57:12.324159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:48.644 [2024-11-20 16:57:12.487872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.644 [2024-11-20 16:57:12.488017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.644 [2024-11-20 16:57:12.488174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.644 [2024-11-20 16:57:12.488185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.581 16:57:13 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.581 16:57:13 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:49.581 16:57:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:49.581 16:57:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.581 16:57:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.581 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:49.581 POWER: Cannot set governor of lcore 0 to userspace 00:04:49.581 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:49.581 POWER: Cannot set governor of lcore 0 to performance 00:04:49.581 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:49.581 POWER: Cannot set governor of lcore 0 to userspace 00:04:49.581 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:49.581 POWER: Cannot set governor of lcore 0 to userspace 00:04:49.581 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:49.581 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:49.581 POWER: Unable to set Power Management Environment for lcore 0 00:04:49.581 [2024-11-20 16:57:13.087210] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:49.581 [2024-11-20 16:57:13.087237] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:49.581 [2024-11-20 16:57:13.087251] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:49.581 [2024-11-20 16:57:13.087276] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:49.581 [2024-11-20 16:57:13.087288] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:49.581 [2024-11-20 16:57:13.087301] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:49.582 16:57:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.582 16:57:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:49.582 16:57:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.582 16:57:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.582 [2024-11-20 16:57:13.411103] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:49.582 16:57:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.582 16:57:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:49.582 16:57:13 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.582 16:57:13 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.582 16:57:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.582 ************************************ 00:04:49.582 START TEST scheduler_create_thread 00:04:49.582 ************************************ 00:04:49.582 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:49.582 16:57:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:49.582 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.582 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.582 2 00:04:49.582 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.582 16:57:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:49.582 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.582 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.582 3 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.841 4 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.841 5 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.841 6 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:04:49.841 7 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.841 8 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.841 9 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.841 10 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.841 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.842 16:57:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:49.842 16:57:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:49.842 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.842 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.842 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.842 16:57:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:49.842 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.842 16:57:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.220 16:57:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.220 16:57:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:51.220 16:57:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:51.220 16:57:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.220 16:57:14 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.188 ************************************ 00:04:52.188 END TEST scheduler_create_thread 00:04:52.188 ************************************ 00:04:52.188 16:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.188 00:04:52.188 real 0m2.621s 00:04:52.188 user 0m0.019s 00:04:52.188 sys 0m0.005s 00:04:52.188 16:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.188 16:57:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.446 16:57:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:52.446 16:57:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58090 00:04:52.446 16:57:16 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58090 ']' 00:04:52.446 16:57:16 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58090 00:04:52.446 16:57:16 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:52.446 16:57:16 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.446 16:57:16 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58090 00:04:52.446 killing process with pid 58090 00:04:52.446 16:57:16 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:52.446 16:57:16 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:52.446 16:57:16 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58090' 00:04:52.446 16:57:16 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58090 00:04:52.446 16:57:16 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58090 00:04:52.704 [2024-11-20 16:57:16.524937] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:54.080 ************************************ 00:04:54.080 END TEST event_scheduler 00:04:54.080 ************************************ 00:04:54.080 00:04:54.080 real 0m5.710s 00:04:54.080 user 0m9.848s 00:04:54.080 sys 0m0.535s 00:04:54.080 16:57:17 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.080 16:57:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.080 16:57:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:54.080 16:57:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:54.080 16:57:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.080 16:57:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.080 16:57:17 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.080 ************************************ 00:04:54.080 START TEST app_repeat 00:04:54.080 ************************************ 00:04:54.080 16:57:17 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:54.080 16:57:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.080 16:57:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.080 16:57:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:54.080 16:57:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.080 16:57:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:54.080 16:57:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:54.080 16:57:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:54.080 Process app_repeat pid: 58196 00:04:54.080 spdk_app_start Round 0 00:04:54.080 16:57:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58196 00:04:54.080 16:57:17 event.app_repeat -- event/event.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:54.080 16:57:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.080 16:57:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58196' 00:04:54.080 16:57:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:54.080 16:57:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:54.080 16:57:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58196 /var/tmp/spdk-nbd.sock 00:04:54.080 16:57:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58196 ']' 00:04:54.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:54.080 16:57:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.080 16:57:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.080 16:57:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:54.080 16:57:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.080 16:57:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.080 [2024-11-20 16:57:17.650886] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:04:54.080 [2024-11-20 16:57:17.651065] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58196 ] 00:04:54.080 [2024-11-20 16:57:17.833779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.339 [2024-11-20 16:57:17.950273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.340 [2024-11-20 16:57:17.950284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.907 16:57:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.907 16:57:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:54.907 16:57:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.166 Malloc0 00:04:55.166 16:57:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.426 Malloc1 00:04:55.426 16:57:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.426 16:57:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.426 16:57:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.426 16:57:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:55.426 16:57:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.426 16:57:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:55.426 16:57:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.426 16:57:19 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.426 16:57:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.426 16:57:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:55.426 16:57:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.426 16:57:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:55.426 16:57:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:55.426 16:57:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:55.426 16:57:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.426 16:57:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:55.769 /dev/nbd0 00:04:55.769 16:57:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:55.769 16:57:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:55.769 16:57:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:55.769 16:57:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:55.769 16:57:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:55.769 16:57:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:55.769 16:57:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:55.769 16:57:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:55.769 16:57:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:55.769 16:57:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:55.769 16:57:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.769 1+0 records in 00:04:55.769 1+0 
records out 00:04:55.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227161 s, 18.0 MB/s 00:04:55.769 16:57:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.769 16:57:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:55.769 16:57:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.769 16:57:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:55.769 16:57:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:55.769 16:57:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.769 16:57:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.769 16:57:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:56.029 /dev/nbd1 00:04:56.029 16:57:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:56.029 16:57:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:56.029 16:57:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:56.029 16:57:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:56.029 16:57:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:56.029 16:57:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:56.029 16:57:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:56.288 16:57:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:56.288 16:57:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:56.288 16:57:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:56.288 16:57:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.288 1+0 records in 00:04:56.288 1+0 records out 00:04:56.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334681 s, 12.2 MB/s 00:04:56.288 16:57:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.288 16:57:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:56.288 16:57:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.288 16:57:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:56.288 16:57:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:56.288 16:57:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.288 16:57:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.288 16:57:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.288 16:57:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.288 16:57:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:56.548 { 00:04:56.548 "nbd_device": "/dev/nbd0", 00:04:56.548 "bdev_name": "Malloc0" 00:04:56.548 }, 00:04:56.548 { 00:04:56.548 "nbd_device": "/dev/nbd1", 00:04:56.548 "bdev_name": "Malloc1" 00:04:56.548 } 00:04:56.548 ]' 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:56.548 { 00:04:56.548 "nbd_device": "/dev/nbd0", 00:04:56.548 "bdev_name": "Malloc0" 00:04:56.548 }, 00:04:56.548 { 00:04:56.548 "nbd_device": "/dev/nbd1", 00:04:56.548 "bdev_name": "Malloc1" 00:04:56.548 } 00:04:56.548 ]' 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:56.548 /dev/nbd1' 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:56.548 /dev/nbd1' 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:56.548 256+0 records in 00:04:56.548 256+0 records out 00:04:56.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103704 s, 101 MB/s 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:56.548 256+0 records in 00:04:56.548 256+0 records out 00:04:56.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290699 s, 36.1 MB/s 00:04:56.548 16:57:20 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:56.548 256+0 records in 00:04:56.548 256+0 records out 00:04:56.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0354776 s, 29.6 MB/s 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.548 16:57:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.807 16:57:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.808 16:57:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.808 16:57:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.808 16:57:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.808 16:57:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.808 16:57:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.808 16:57:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.808 16:57:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.808 16:57:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.808 16:57:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:57.067 16:57:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:57.067 16:57:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:57.067 16:57:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:57.067 16:57:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.067 16:57:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.067 16:57:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:57.067 16:57:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:04:57.067 16:57:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.067 16:57:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.067 16:57:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.067 16:57:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.636 16:57:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:57.636 16:57:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:57.636 16:57:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.636 16:57:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:57.636 16:57:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:57.636 16:57:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.636 16:57:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:57.636 16:57:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:57.636 16:57:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:57.636 16:57:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:57.636 16:57:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:57.636 16:57:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:57.636 16:57:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:57.896 16:57:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:58.832 [2024-11-20 16:57:22.671203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.091 [2024-11-20 16:57:22.774750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.091 [2024-11-20 16:57:22.774812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.091 
[2024-11-20 16:57:22.944908] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.091 [2024-11-20 16:57:22.945002] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:00.995 spdk_app_start Round 1 00:05:00.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.995 16:57:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.995 16:57:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:00.995 16:57:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58196 /var/tmp/spdk-nbd.sock 00:05:00.995 16:57:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58196 ']' 00:05:00.995 16:57:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.995 16:57:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.995 16:57:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:00.995 16:57:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.995 16:57:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:01.254 16:57:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.254 16:57:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:01.254 16:57:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.513 Malloc0 00:05:01.513 16:57:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.771 Malloc1 00:05:01.771 16:57:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.771 16:57:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.771 16:57:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.771 16:57:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.771 16:57:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.771 16:57:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.771 16:57:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.771 16:57:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.771 16:57:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.771 16:57:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.771 16:57:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.771 16:57:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:01.771 16:57:25 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:01.771 16:57:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.772 16:57:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.772 16:57:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:02.030 /dev/nbd0 00:05:02.030 16:57:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:02.030 16:57:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:02.030 16:57:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:02.030 16:57:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.030 16:57:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.030 16:57:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.030 16:57:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:02.030 16:57:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:02.030 16:57:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.031 16:57:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.031 16:57:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.031 1+0 records in 00:05:02.031 1+0 records out 00:05:02.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278853 s, 14.7 MB/s 00:05:02.031 16:57:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.031 16:57:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.031 16:57:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.031 
16:57:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.031 16:57:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.031 16:57:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.031 16:57:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.031 16:57:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:02.289 /dev/nbd1 00:05:02.289 16:57:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:02.289 16:57:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:02.289 16:57:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:02.289 16:57:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.289 16:57:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.289 16:57:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.289 16:57:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:02.289 16:57:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:02.289 16:57:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.289 16:57:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.289 16:57:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.289 1+0 records in 00:05:02.289 1+0 records out 00:05:02.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285292 s, 14.4 MB/s 00:05:02.289 16:57:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.289 16:57:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.289 16:57:26 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.290 16:57:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.290 16:57:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.290 16:57:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.290 16:57:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.290 16:57:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.290 16:57:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.290 16:57:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.548 16:57:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:02.548 { 00:05:02.548 "nbd_device": "/dev/nbd0", 00:05:02.548 "bdev_name": "Malloc0" 00:05:02.548 }, 00:05:02.548 { 00:05:02.548 "nbd_device": "/dev/nbd1", 00:05:02.548 "bdev_name": "Malloc1" 00:05:02.548 } 00:05:02.548 ]' 00:05:02.548 16:57:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:02.548 { 00:05:02.548 "nbd_device": "/dev/nbd0", 00:05:02.548 "bdev_name": "Malloc0" 00:05:02.548 }, 00:05:02.548 { 00:05:02.548 "nbd_device": "/dev/nbd1", 00:05:02.548 "bdev_name": "Malloc1" 00:05:02.548 } 00:05:02.548 ]' 00:05:02.548 16:57:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:02.808 /dev/nbd1' 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:02.808 /dev/nbd1' 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:02.808 
16:57:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:02.808 256+0 records in 00:05:02.808 256+0 records out 00:05:02.808 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00963543 s, 109 MB/s 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:02.808 256+0 records in 00:05:02.808 256+0 records out 00:05:02.808 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023946 s, 43.8 MB/s 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:02.808 256+0 records in 00:05:02.808 256+0 records out 00:05:02.808 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286602 s, 36.6 MB/s 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.808 16:57:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:03.067 16:57:26 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:03.067 16:57:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:03.067 16:57:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:03.067 16:57:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.067 16:57:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.067 16:57:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:03.067 16:57:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.067 16:57:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.067 16:57:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.067 16:57:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.326 16:57:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.326 16:57:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.326 16:57:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.326 16:57:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.326 16:57:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.326 16:57:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.326 16:57:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.326 16:57:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.326 16:57:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.326 16:57:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.326 16:57:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.584 16:57:27 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:03.584 16:57:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:03.584 16:57:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.584 16:57:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:03.844 16:57:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.844 16:57:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:03.844 16:57:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:03.844 16:57:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:03.844 16:57:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:03.844 16:57:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:03.844 16:57:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:03.844 16:57:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:03.844 16:57:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.103 16:57:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:05.039 [2024-11-20 16:57:28.829609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.299 [2024-11-20 16:57:28.949196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.299 [2024-11-20 16:57:28.949204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.299 [2024-11-20 16:57:29.126034] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:05.299 [2024-11-20 16:57:29.126147] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.209 spdk_app_start Round 2 00:05:07.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:07.209 16:57:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:07.209 16:57:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:07.209 16:57:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58196 /var/tmp/spdk-nbd.sock 00:05:07.209 16:57:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58196 ']' 00:05:07.209 16:57:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.209 16:57:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.209 16:57:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:07.209 16:57:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.209 16:57:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.468 16:57:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.468 16:57:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:07.468 16:57:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.726 Malloc0 00:05:07.726 16:57:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.986 Malloc1 00:05:07.986 16:57:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.986 16:57:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.986 16:57:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.986 16:57:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:07.986 16:57:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.986 16:57:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:07.986 16:57:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.986 16:57:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.986 16:57:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.986 16:57:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:07.986 16:57:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.986 16:57:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:07.986 16:57:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:07.986 16:57:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:07.986 16:57:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.986 16:57:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:08.555 /dev/nbd0 00:05:08.555 16:57:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:08.555 16:57:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:08.555 16:57:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:08.555 16:57:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:08.555 16:57:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:08.555 16:57:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:08.555 16:57:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:08.555 16:57:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:08.555 16:57:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:08.555 16:57:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:08.555 16:57:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.555 1+0 records in 00:05:08.555 1+0 records out 00:05:08.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267192 s, 15.3 MB/s 00:05:08.555 16:57:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.555 16:57:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:08.555 16:57:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.555 16:57:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:08.555 16:57:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:08.555 16:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.555 16:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.555 16:57:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:08.815 /dev/nbd1 00:05:08.815 16:57:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:08.815 16:57:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:08.815 16:57:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:08.815 16:57:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:08.815 16:57:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:08.815 16:57:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:08.816 16:57:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:08.816 16:57:32 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:08.816 16:57:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:08.816 16:57:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:08.816 16:57:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.816 1+0 records in 00:05:08.816 1+0 records out 00:05:08.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240861 s, 17.0 MB/s 00:05:08.816 16:57:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.816 16:57:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:08.816 16:57:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.816 16:57:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:08.816 16:57:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:08.816 16:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.816 16:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.816 16:57:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.816 16:57:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.816 16:57:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:09.075 { 00:05:09.075 "nbd_device": "/dev/nbd0", 00:05:09.075 "bdev_name": "Malloc0" 00:05:09.075 }, 00:05:09.075 { 00:05:09.075 "nbd_device": "/dev/nbd1", 00:05:09.075 "bdev_name": "Malloc1" 00:05:09.075 } 00:05:09.075 ]' 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:09.075 { 
00:05:09.075 "nbd_device": "/dev/nbd0", 00:05:09.075 "bdev_name": "Malloc0" 00:05:09.075 }, 00:05:09.075 { 00:05:09.075 "nbd_device": "/dev/nbd1", 00:05:09.075 "bdev_name": "Malloc1" 00:05:09.075 } 00:05:09.075 ]' 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:09.075 /dev/nbd1' 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:09.075 /dev/nbd1' 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:09.075 256+0 records in 00:05:09.075 256+0 records out 00:05:09.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010843 s, 96.7 MB/s 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.075 16:57:32 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:09.075 256+0 records in 00:05:09.075 256+0 records out 00:05:09.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236659 s, 44.3 MB/s 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.075 16:57:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:09.334 256+0 records in 00:05:09.334 256+0 records out 00:05:09.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304767 s, 34.4 MB/s 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
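The `nbd_dd_data_verify` trace above boils down to two passes: a write pass (`dd` 1 MiB of random data from a source file onto each nbd device) and a verify pass (`cmp -b -n 1M` each device back against that source file). A minimal sketch of the same pattern, using plain temp files as stand-ins for `/dev/nbd0` and `/dev/nbd1` so it runs without an nbd device attached:

```shell
# Write/verify sketch of nbd_dd_data_verify; temp files stand in for /dev/nbd*.
tmp_file=$(mktemp)   # plays the role of the nbdrandtest source file
dev0=$(mktemp)       # stand-in for /dev/nbd0
dev1=$(mktemp)       # stand-in for /dev/nbd1

# Write pass: fill the source file with 1 MiB of random data, copy it to each "device".
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "$dev0" "$dev1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# Verify pass: byte-compare the first 1M of each "device" against the source.
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$tmp_file" "$dev" || { echo "mismatch on $dev"; exit 1; }
done
echo "verify ok"
rm -f "$tmp_file" "$dev0" "$dev1"
```

The real helper adds `oflag=direct` on the device writes to bypass the page cache, which only makes sense against a block device and is dropped here.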
00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.334 16:57:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:09.594 16:57:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:09.594 16:57:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:09.594 16:57:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:09.594 16:57:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.594 16:57:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.594 16:57:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:09.594 16:57:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.594 16:57:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.594 16:57:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.594 16:57:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:09.853 16:57:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.853 16:57:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.853 16:57:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.853 16:57:33 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.853 16:57:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.853 16:57:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.853 16:57:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.853 16:57:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.853 16:57:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.853 16:57:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.853 16:57:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.112 16:57:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.112 16:57:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.112 16:57:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.112 16:57:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.112 16:57:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.112 16:57:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.112 16:57:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:10.112 16:57:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.112 16:57:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.112 16:57:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.112 16:57:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.112 16:57:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.112 16:57:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.680 16:57:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:11.617 
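The `waitfornbd` and `waitfornbd_exit` helpers traced through this section share one pattern: poll up to 20 times for a condition (the device name appearing in, or disappearing from, `/proc/partitions`) and `break` out of the loop once it holds. A hedged sketch of that retry loop, using a plain file-existence check in place of the `grep -q -w nbd0 /proc/partitions` probe so it runs without an nbd device:

```shell
# Retry-until-present sketch of waitfornbd; a file-existence test stands in
# for "grep -q -w nbd0 /proc/partitions".
waitfor() {
    local path=$1 i
    for (( i = 1; i <= 20; i++ )); do
        if [ -e "$path" ]; then
            return 0            # condition holds: the device/file showed up
        fi
        sleep 0.1               # back off briefly before the next probe
    done
    return 1                    # gave up after 20 attempts
}

probe=$(mktemp)
waitfor "$probe" && echo "found $probe"
rm -f "$probe"
```

The `waitfornbd` variant in the log additionally does a 1-block `dd` read from the device and checks the copied size with `stat`, confirming the device is actually readable, not merely listed.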
[2024-11-20 16:57:35.398577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.877 [2024-11-20 16:57:35.510051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.877 [2024-11-20 16:57:35.510075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.877 [2024-11-20 16:57:35.672921] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.877 [2024-11-20 16:57:35.673048] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:13.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.780 16:57:37 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58196 /var/tmp/spdk-nbd.sock 00:05:13.780 16:57:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58196 ']' 00:05:13.780 16:57:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.780 16:57:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.780 16:57:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:13.780 16:57:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.780 16:57:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.040 16:57:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.040 16:57:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.040 16:57:37 event.app_repeat -- event/event.sh@39 -- # killprocess 58196 00:05:14.040 16:57:37 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58196 ']' 00:05:14.040 16:57:37 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58196 00:05:14.040 16:57:37 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:14.040 16:57:37 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.040 16:57:37 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58196 00:05:14.040 killing process with pid 58196 00:05:14.040 16:57:37 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.040 16:57:37 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.040 16:57:37 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58196' 00:05:14.040 16:57:37 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58196 00:05:14.040 16:57:37 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58196 00:05:14.978 spdk_app_start is called in Round 0. 00:05:14.978 Shutdown signal received, stop current app iteration 00:05:14.978 Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 reinitialization... 00:05:14.978 spdk_app_start is called in Round 1. 00:05:14.978 Shutdown signal received, stop current app iteration 00:05:14.978 Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 reinitialization... 00:05:14.978 spdk_app_start is called in Round 2. 
00:05:14.978 Shutdown signal received, stop current app iteration 00:05:14.978 Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 reinitialization... 00:05:14.978 spdk_app_start is called in Round 3. 00:05:14.978 Shutdown signal received, stop current app iteration 00:05:14.978 16:57:38 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:14.978 16:57:38 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:14.978 00:05:14.978 real 0m21.090s 00:05:14.979 user 0m46.564s 00:05:14.979 sys 0m3.054s 00:05:14.979 16:57:38 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.979 16:57:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.979 ************************************ 00:05:14.979 END TEST app_repeat 00:05:14.979 ************************************ 00:05:14.979 16:57:38 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:14.979 16:57:38 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:14.979 16:57:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.979 16:57:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.979 16:57:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.979 ************************************ 00:05:14.979 START TEST cpu_locks 00:05:14.979 ************************************ 00:05:14.979 16:57:38 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:14.979 * Looking for test storage... 
00:05:14.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:14.979 16:57:38 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.979 16:57:38 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.979 16:57:38 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.238 16:57:38 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.238 16:57:38 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.238 16:57:38 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.238 16:57:38 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.238 16:57:38 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.238 16:57:38 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.238 16:57:38 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.238 16:57:38 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.238 16:57:38 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.238 16:57:38 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.238 16:57:38 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.239 16:57:38 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:15.239 16:57:38 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.239 16:57:38 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.239 --rc genhtml_branch_coverage=1 00:05:15.239 --rc genhtml_function_coverage=1 00:05:15.239 --rc genhtml_legend=1 00:05:15.239 --rc geninfo_all_blocks=1 00:05:15.239 --rc geninfo_unexecuted_blocks=1 00:05:15.239 00:05:15.239 ' 00:05:15.239 16:57:38 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.239 --rc genhtml_branch_coverage=1 00:05:15.239 --rc genhtml_function_coverage=1 00:05:15.239 --rc genhtml_legend=1 00:05:15.239 --rc geninfo_all_blocks=1 00:05:15.239 --rc geninfo_unexecuted_blocks=1 
00:05:15.239 00:05:15.239 ' 00:05:15.239 16:57:38 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.239 --rc genhtml_branch_coverage=1 00:05:15.239 --rc genhtml_function_coverage=1 00:05:15.239 --rc genhtml_legend=1 00:05:15.239 --rc geninfo_all_blocks=1 00:05:15.239 --rc geninfo_unexecuted_blocks=1 00:05:15.239 00:05:15.239 ' 00:05:15.239 16:57:38 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.239 --rc genhtml_branch_coverage=1 00:05:15.239 --rc genhtml_function_coverage=1 00:05:15.239 --rc genhtml_legend=1 00:05:15.239 --rc geninfo_all_blocks=1 00:05:15.239 --rc geninfo_unexecuted_blocks=1 00:05:15.239 00:05:15.239 ' 00:05:15.239 16:57:38 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:15.239 16:57:38 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:15.239 16:57:38 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:15.239 16:57:38 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:15.239 16:57:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.239 16:57:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.239 16:57:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.239 ************************************ 00:05:15.239 START TEST default_locks 00:05:15.239 ************************************ 00:05:15.239 16:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:15.239 16:57:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58660 00:05:15.239 16:57:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.239 
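The `scripts/common.sh` trace above (`lt 1.15 2` via `cmp_versions`) splits each dotted version string on the characters `.-:` into an array, then walks the components in parallel, comparing them numerically and treating missing components as 0. A rough bash sketch of that comparison (the helper name `version_lt` is ours, not SPDK's, and it handles numeric components only):

```shell
# Component-wise dotted-version comparison, as in cmp_versions.
version_lt() {
    local IFS=.-:                       # split on the same separators as the trace
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # pad the shorter version with zeros
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                            # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This is why `lt 1.15 2` succeeds in the trace: 1 < 2 on the very first component, so lcov 1.15 is treated as older than 2.x and the fallback `LCOV_OPTS` are exported.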
16:57:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58660 00:05:15.239 16:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58660 ']' 00:05:15.239 16:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.239 16:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.239 16:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.239 16:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.239 16:57:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.239 [2024-11-20 16:57:39.046582] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:05:15.239 [2024-11-20 16:57:39.046803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58660 ] 00:05:15.501 [2024-11-20 16:57:39.230600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.501 [2024-11-20 16:57:39.340945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.438 16:57:40 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.438 16:57:40 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:16.438 16:57:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58660 00:05:16.438 16:57:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58660 00:05:16.438 16:57:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.007 16:57:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58660 00:05:17.007 16:57:40 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58660 ']' 00:05:17.007 16:57:40 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58660 00:05:17.007 16:57:40 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:17.007 16:57:40 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.007 16:57:40 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58660 00:05:17.007 16:57:40 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.007 16:57:40 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.007 killing process with pid 58660 00:05:17.007 16:57:40 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58660' 00:05:17.007 16:57:40 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58660 00:05:17.007 16:57:40 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58660 00:05:18.913 16:57:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58660 00:05:18.913 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:18.913 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58660 00:05:18.913 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:18.913 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.913 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:18.913 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.913 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58660 00:05:18.913 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58660 ']' 00:05:18.913 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.913 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.913 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
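The `killprocess` helper traced above probes the pid with `kill -0`, inspects the process name with `ps --no-headers -o comm=` (switching to `sudo kill` only for root-owned targets), then sends SIGTERM and `wait`s for the pid to be reaped. A simplified sketch that drops the name/sudo checks and runs against a background `sleep`:

```shell
# Simplified killprocess: verify the pid is alive, SIGTERM it, reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # no such live process
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; ignore the SIGTERM exit status
    return 0
}

sleep 30 &
bg=$!
killprocess "$bg"
```

`wait` is what makes the later `lslocks -p` checks in this test reliable: once the pid is reaped, its `spdk_cpu_lock` file locks are guaranteed to be released.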
00:05:18.913 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.913 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.913 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58660) - No such process 00:05:18.913 ERROR: process (pid: 58660) is no longer running 00:05:18.913 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.913 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:18.913 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:18.914 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:18.914 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:18.914 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:18.914 16:57:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:18.914 16:57:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:18.914 16:57:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:18.914 16:57:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:18.914 00:05:18.914 real 0m3.755s 00:05:18.914 user 0m3.780s 00:05:18.914 sys 0m0.725s 00:05:18.914 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.914 16:57:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.914 ************************************ 00:05:18.914 END TEST default_locks 00:05:18.914 ************************************ 00:05:18.914 16:57:42 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:18.914 16:57:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:18.914 16:57:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.914 16:57:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.914 ************************************ 00:05:18.914 START TEST default_locks_via_rpc 00:05:18.914 ************************************ 00:05:18.914 16:57:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:18.914 16:57:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58735 00:05:18.914 16:57:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.914 16:57:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58735 00:05:18.914 16:57:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58735 ']' 00:05:18.914 16:57:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.914 16:57:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.914 16:57:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.914 16:57:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.914 16:57:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.173 [2024-11-20 16:57:42.832041] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:05:19.173 [2024-11-20 16:57:42.832217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58735 ] 00:05:19.173 [2024-11-20 16:57:42.999877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.432 [2024-11-20 16:57:43.117856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.369 16:57:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.369 16:57:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:20.369 16:57:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:20.369 16:57:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.369 16:57:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.369 16:57:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.369 16:57:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:20.370 16:57:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:20.370 16:57:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:20.370 16:57:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:20.370 16:57:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:20.370 16:57:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.370 16:57:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.370 16:57:43 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.370 16:57:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58735 00:05:20.370 16:57:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58735 00:05:20.370 16:57:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.629 16:57:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58735 00:05:20.629 16:57:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58735 ']' 00:05:20.629 16:57:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58735 00:05:20.629 16:57:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:20.629 16:57:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.629 16:57:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58735 00:05:20.629 16:57:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.629 16:57:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.629 16:57:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58735' 00:05:20.629 killing process with pid 58735 00:05:20.629 16:57:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58735 00:05:20.629 16:57:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58735 00:05:22.532 00:05:22.532 real 0m3.604s 00:05:22.532 user 0m3.587s 00:05:22.532 sys 0m0.665s 00:05:22.532 16:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.532 16:57:46 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.532 ************************************ 00:05:22.532 END TEST default_locks_via_rpc 00:05:22.532 ************************************ 00:05:22.532 16:57:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:22.532 16:57:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.532 16:57:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.532 16:57:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.532 ************************************ 00:05:22.532 START TEST non_locking_app_on_locked_coremask 00:05:22.532 ************************************ 00:05:22.532 16:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:22.532 16:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58803 00:05:22.532 16:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58803 /var/tmp/spdk.sock 00:05:22.532 16:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58803 ']' 00:05:22.532 16:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.532 16:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.532 16:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:22.532 16:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.532 16:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.532 16:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.791 [2024-11-20 16:57:46.476001] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:05:22.792 [2024-11-20 16:57:46.476150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58803 ] 00:05:22.792 [2024-11-20 16:57:46.651276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.051 [2024-11-20 16:57:46.762128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.988 16:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.988 16:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:23.988 16:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58819 00:05:23.988 16:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:23.988 16:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58819 /var/tmp/spdk2.sock 00:05:23.988 16:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58819 ']' 00:05:23.988 16:57:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.988 16:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.988 16:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.988 16:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.988 16:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.988 [2024-11-20 16:57:47.609712] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:05:23.988 [2024-11-20 16:57:47.609880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58819 ] 00:05:23.988 [2024-11-20 16:57:47.796453] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:23.988 [2024-11-20 16:57:47.796516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.247 [2024-11-20 16:57:48.012265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.779 16:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.779 16:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:26.779 16:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58803 00:05:26.779 16:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58803 00:05:26.779 16:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.346 16:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58803 00:05:27.346 16:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58803 ']' 00:05:27.346 16:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58803 00:05:27.346 16:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:27.346 16:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.346 16:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58803 00:05:27.346 16:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.346 16:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.346 killing process with pid 58803 00:05:27.346 16:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58803' 00:05:27.346 16:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58803 00:05:27.346 16:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58803 00:05:31.531 16:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58819 00:05:31.531 16:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58819 ']' 00:05:31.531 16:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58819 00:05:31.531 16:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:31.531 16:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.531 16:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58819 00:05:31.531 16:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.531 16:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.531 killing process with pid 58819 00:05:31.531 16:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58819' 00:05:31.531 16:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58819 00:05:31.531 16:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58819 00:05:33.439 00:05:33.439 real 0m10.853s 00:05:33.439 user 0m11.332s 00:05:33.439 sys 0m1.444s 00:05:33.439 16:57:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:33.439 16:57:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.439 ************************************ 00:05:33.439 END TEST non_locking_app_on_locked_coremask 00:05:33.439 ************************************ 00:05:33.439 16:57:57 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:33.439 16:57:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.439 16:57:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.439 16:57:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.439 ************************************ 00:05:33.439 START TEST locking_app_on_unlocked_coremask 00:05:33.439 ************************************ 00:05:33.439 16:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:33.439 16:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58964 00:05:33.439 16:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58964 /var/tmp/spdk.sock 00:05:33.439 16:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58964 ']' 00:05:33.439 16:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.439 16:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:33.439 16:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.439 16:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.439 16:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.439 16:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.698 [2024-11-20 16:57:57.411435] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:05:33.698 [2024-11-20 16:57:57.411611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58964 ] 00:05:33.957 [2024-11-20 16:57:57.591096] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:33.957 [2024-11-20 16:57:57.591160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.957 [2024-11-20 16:57:57.694927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.895 16:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.895 16:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:34.895 16:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58980 00:05:34.895 16:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:34.895 16:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58980 /var/tmp/spdk2.sock 00:05:34.895 16:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58980 ']' 00:05:34.895 16:57:58 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.895 16:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.895 16:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.895 16:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.895 16:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.895 [2024-11-20 16:57:58.605892] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:05:34.895 [2024-11-20 16:57:58.606065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58980 ] 00:05:35.154 [2024-11-20 16:57:58.805609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.413 [2024-11-20 16:57:59.043741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.948 16:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.948 16:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:37.948 16:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58980 00:05:37.948 16:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58980 00:05:37.948 16:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.207 16:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58964 00:05:38.207 16:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58964 ']' 00:05:38.207 16:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58964 00:05:38.207 16:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:38.207 16:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.207 16:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58964 00:05:38.465 killing process with pid 58964 00:05:38.465 16:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.465 16:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.465 16:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58964' 00:05:38.465 16:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58964 00:05:38.465 16:58:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58964 00:05:42.659 16:58:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58980 00:05:42.659 16:58:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58980 ']' 00:05:42.659 16:58:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58980 00:05:42.659 16:58:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:42.659 
16:58:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.659 16:58:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58980 00:05:42.659 killing process with pid 58980 00:05:42.659 16:58:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.659 16:58:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.659 16:58:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58980' 00:05:42.659 16:58:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58980 00:05:42.659 16:58:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58980 00:05:44.564 00:05:44.564 real 0m10.851s 00:05:44.564 user 0m11.310s 00:05:44.564 sys 0m1.458s 00:05:44.564 16:58:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.564 16:58:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.564 ************************************ 00:05:44.564 END TEST locking_app_on_unlocked_coremask 00:05:44.564 ************************************ 00:05:44.564 16:58:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:44.564 16:58:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.564 16:58:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.564 16:58:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.564 ************************************ 00:05:44.564 START TEST locking_app_on_locked_coremask 00:05:44.564 
************************************ 00:05:44.564 16:58:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:44.564 16:58:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59126 00:05:44.564 16:58:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59126 /var/tmp/spdk.sock 00:05:44.564 16:58:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59126 ']' 00:05:44.564 16:58:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.564 16:58:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.564 16:58:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.564 16:58:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.564 16:58:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.564 16:58:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.564 [2024-11-20 16:58:08.319382] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:05:44.564 [2024-11-20 16:58:08.319554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59126 ] 00:05:44.823 [2024-11-20 16:58:08.505815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.823 [2024-11-20 16:58:08.620608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59147 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59147 /var/tmp/spdk2.sock 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59147 /var/tmp/spdk2.sock 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59147 /var/tmp/spdk2.sock 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59147 ']' 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.759 16:58:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.759 [2024-11-20 16:58:09.570699] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:05:45.759 [2024-11-20 16:58:09.571233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59147 ] 00:05:46.018 [2024-11-20 16:58:09.769508] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59126 has claimed it. 00:05:46.018 [2024-11-20 16:58:09.769602] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:05:46.601 ERROR: process (pid: 59147) is no longer running 00:05:46.601 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59147) - No such process 00:05:46.601 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.601 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:46.601 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:46.601 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:46.601 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:46.601 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:46.601 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59126 00:05:46.601 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59126 00:05:46.601 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.873 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59126 00:05:46.873 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59126 ']' 00:05:46.873 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59126 00:05:46.873 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:46.873 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.873 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59126 00:05:46.873 
killing process with pid 59126 00:05:46.873 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.873 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.873 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59126' 00:05:46.873 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59126 00:05:46.873 16:58:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59126 00:05:48.788 ************************************ 00:05:48.788 END TEST locking_app_on_locked_coremask 00:05:48.788 ************************************ 00:05:48.788 00:05:48.788 real 0m4.382s 00:05:48.788 user 0m4.688s 00:05:48.788 sys 0m0.866s 00:05:48.788 16:58:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.788 16:58:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.788 16:58:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:48.788 16:58:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.788 16:58:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.788 16:58:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.788 ************************************ 00:05:48.788 START TEST locking_overlapped_coremask 00:05:48.788 ************************************ 00:05:48.788 16:58:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:48.788 16:58:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59211 00:05:48.788 16:58:12 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59211 /var/tmp/spdk.sock 00:05:48.788 16:58:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:48.788 16:58:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59211 ']' 00:05:48.788 16:58:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.788 16:58:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.788 16:58:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.788 16:58:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.788 16:58:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.047 [2024-11-20 16:58:12.763921] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:05:49.047 [2024-11-20 16:58:12.764110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59211 ] 00:05:49.305 [2024-11-20 16:58:12.948611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.305 [2024-11-20 16:58:13.068568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.305 [2024-11-20 16:58:13.068707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.306 [2024-11-20 16:58:13.068724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59229 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59229 /var/tmp/spdk2.sock 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59229 /var/tmp/spdk2.sock 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59229 /var/tmp/spdk2.sock 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59229 ']' 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.244 16:58:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.244 [2024-11-20 16:58:14.048314] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:05:50.244 [2024-11-20 16:58:14.048867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59229 ] 00:05:50.503 [2024-11-20 16:58:14.257412] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59211 has claimed it. 00:05:50.503 [2024-11-20 16:58:14.257488] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:05:51.072 ERROR: process (pid: 59229) is no longer running 00:05:51.072 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59229) - No such process 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59211 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59211 ']' 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59211 00:05:51.072 16:58:14 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59211 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59211' 00:05:51.072 killing process with pid 59211 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59211 00:05:51.072 16:58:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59211 00:05:53.608 00:05:53.608 real 0m4.234s 00:05:53.608 user 0m11.476s 00:05:53.608 sys 0m0.716s 00:05:53.608 16:58:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.608 ************************************ 00:05:53.608 END TEST locking_overlapped_coremask 00:05:53.608 ************************************ 00:05:53.608 16:58:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.608 16:58:16 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:53.608 16:58:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.608 16:58:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.608 16:58:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.608 ************************************ 00:05:53.608 START TEST 
locking_overlapped_coremask_via_rpc 00:05:53.608 ************************************ 00:05:53.608 16:58:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:53.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.608 16:58:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59293 00:05:53.608 16:58:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59293 /var/tmp/spdk.sock 00:05:53.608 16:58:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:53.608 16:58:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59293 ']' 00:05:53.608 16:58:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.608 16:58:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.608 16:58:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.608 16:58:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.608 16:58:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.608 [2024-11-20 16:58:17.040988] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:05:53.608 [2024-11-20 16:58:17.041168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59293 ] 00:05:53.608 [2024-11-20 16:58:17.224225] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:53.608 [2024-11-20 16:58:17.224545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.608 [2024-11-20 16:58:17.343225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.608 [2024-11-20 16:58:17.343399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.608 [2024-11-20 16:58:17.343411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.545 16:58:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.545 16:58:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:54.545 16:58:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59311 00:05:54.546 16:58:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:54.546 16:58:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59311 /var/tmp/spdk2.sock 00:05:54.546 16:58:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59311 ']' 00:05:54.546 16:58:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.546 16:58:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.546 16:58:18 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.546 16:58:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.546 16:58:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.546 [2024-11-20 16:58:18.304672] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:05:54.546 [2024-11-20 16:58:18.305180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59311 ] 00:05:54.804 [2024-11-20 16:58:18.502792] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
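The xtrace above repeatedly launches spdk_tgt with overlapping coremasks and then verifies the per-core lock files under /var/tmp (the traced check_remaining_locks compares a glob of /var/tmp/spdk_cpu_lock_* against the brace expansion /var/tmp/spdk_cpu_lock_{000..002} expected for coremask 0x7). A minimal sketch of that lock-file check, reconstructed from the trace; the temp directory and script below are illustrative stand-ins, not the shipped event/cpu_locks.sh:

```shell
#!/usr/bin/env bash
# Sketch of the check_remaining_locks pattern from the trace above:
# compare the lock files actually present against the set expected
# for coremask 0x7 (cores 0-2). Illustrative reconstruction only.
set -euo pipefail

lockdir=$(mktemp -d)                      # stand-in for /var/tmp in the trace
touch "$lockdir"/spdk_cpu_lock_{000..002} # locks spdk_tgt would hold

locks=("$lockdir"/spdk_cpu_lock_*)                    # what actually exists (glob, sorted)
locks_expected=("$lockdir"/spdk_cpu_lock_{000..002})  # what coremask 0x7 implies

if [[ ${locks[*]} == "${locks_expected[*]}" ]]; then
  echo "locks match"
else
  echo "unexpected locks: ${locks[*]}" >&2
  exit 1
fi
```

Because both the glob and the brace expansion produce lexically ordered paths, a single whole-array string comparison is enough to catch both missing and leftover lock files.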
00:05:54.804 [2024-11-20 16:58:18.502870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.061 [2024-11-20 16:58:18.758989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.061 [2024-11-20 16:58:18.759040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:55.061 [2024-11-20 16:58:18.759021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.597 16:58:21 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.597 [2024-11-20 16:58:21.047014] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59293 has claimed it. 00:05:57.597 request: 00:05:57.597 { 00:05:57.597 "method": "framework_enable_cpumask_locks", 00:05:57.597 "req_id": 1 00:05:57.597 } 00:05:57.597 Got JSON-RPC error response 00:05:57.597 response: 00:05:57.597 { 00:05:57.597 "code": -32603, 00:05:57.597 "message": "Failed to claim CPU core: 2" 00:05:57.597 } 00:05:57.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
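The trace above drives rpc_cmd through a NOT wrapper: the second framework_enable_cpumask_locks call is *expected* to fail with "Failed to claim CPU core: 2", and the harness passes only because it does (the captured status becomes es=1, later checked by the `(( !es == 0 ))` steps). A sketch of that expected-failure pattern, as an illustrative reconstruction rather than the actual NOT() from common/autotest_common.sh:

```shell
#!/usr/bin/env bash
# Sketch of the NOT helper pattern visible in the xtrace above: run a
# command that is expected to fail, capture its exit status in es, and
# succeed only when the command did fail. Illustrative reconstruction.
not() {
  local es=0
  "$@" || es=$?   # capture the status without tripping errexit
  (( es != 0 ))   # pass (return 0) only if the wrapped command failed
}

not false && echo "expected failure observed"
not true  || echo "unexpected success rejected"
```

Inverting the status inside a helper, instead of prefixing `!` at the call site, lets the harness also log which command was expected to fail and keeps the pattern usable inside `set -e` scripts.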
00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59293 /var/tmp/spdk.sock 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59293 ']' 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59311 /var/tmp/spdk2.sock 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59311 ']' 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.597 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.856 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.856 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:57.856 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:57.856 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:57.856 ************************************ 00:05:57.856 END TEST locking_overlapped_coremask_via_rpc 00:05:57.856 ************************************ 00:05:57.856 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:57.856 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:57.856 00:05:57.856 real 0m4.702s 00:05:57.856 user 0m1.679s 00:05:57.856 sys 0m0.218s 00:05:57.856 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.856 16:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.856 16:58:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:57.856 16:58:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59293 ]] 00:05:57.856 16:58:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59293 00:05:57.856 16:58:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59293 ']' 00:05:57.856 16:58:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59293 00:05:57.856 16:58:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:57.856 16:58:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.856 16:58:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59293 00:05:57.856 killing process with pid 59293 00:05:57.856 16:58:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.856 16:58:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.856 16:58:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59293' 00:05:57.856 16:58:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59293 00:05:57.856 16:58:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59293 00:06:00.391 16:58:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59311 ]] 00:06:00.391 16:58:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59311 00:06:00.391 16:58:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59311 ']' 00:06:00.391 16:58:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59311 00:06:00.391 16:58:23 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:00.391 16:58:23 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.391 16:58:23 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59311 00:06:00.391 killing process with pid 59311 00:06:00.391 16:58:23 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:00.391 16:58:23 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:00.391 16:58:23 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59311' 00:06:00.391 16:58:23 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59311 00:06:00.391 16:58:23 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59311 00:06:02.300 16:58:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.300 Process with pid 59293 is not found 00:06:02.300 Process with pid 59311 is not found 00:06:02.300 16:58:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:02.300 16:58:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59293 ]] 00:06:02.300 16:58:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59293 00:06:02.300 16:58:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59293 ']' 00:06:02.300 16:58:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59293 00:06:02.300 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59293) - No such process 00:06:02.300 16:58:25 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59293 is not found' 00:06:02.300 16:58:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59311 ]] 00:06:02.300 16:58:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59311 00:06:02.300 16:58:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59311 ']' 00:06:02.300 16:58:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59311 00:06:02.300 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59311) - No such process 00:06:02.300 16:58:25 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59311 is not found' 00:06:02.300 16:58:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.300 00:06:02.300 real 0m47.261s 00:06:02.300 user 1m22.811s 00:06:02.300 sys 0m7.337s 00:06:02.300 16:58:25 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.300 16:58:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.300 
************************************ 00:06:02.300 END TEST cpu_locks 00:06:02.300 ************************************ 00:06:02.300 00:06:02.300 real 1m19.343s 00:06:02.300 user 2m26.530s 00:06:02.300 sys 0m11.531s 00:06:02.300 16:58:26 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.300 16:58:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.300 ************************************ 00:06:02.300 END TEST event 00:06:02.300 ************************************ 00:06:02.300 16:58:26 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:02.300 16:58:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.300 16:58:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.300 16:58:26 -- common/autotest_common.sh@10 -- # set +x 00:06:02.300 ************************************ 00:06:02.300 START TEST thread 00:06:02.300 ************************************ 00:06:02.300 16:58:26 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:02.559 * Looking for test storage... 
00:06:02.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:02.559 16:58:26 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.559 16:58:26 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.559 16:58:26 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.559 16:58:26 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.559 16:58:26 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.559 16:58:26 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.559 16:58:26 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.559 16:58:26 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.559 16:58:26 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.559 16:58:26 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.559 16:58:26 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.559 16:58:26 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.559 16:58:26 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.559 16:58:26 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.559 16:58:26 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.559 16:58:26 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:02.559 16:58:26 thread -- scripts/common.sh@345 -- # : 1 00:06:02.559 16:58:26 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.559 16:58:26 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.559 16:58:26 thread -- scripts/common.sh@365 -- # decimal 1 00:06:02.559 16:58:26 thread -- scripts/common.sh@353 -- # local d=1 00:06:02.559 16:58:26 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.559 16:58:26 thread -- scripts/common.sh@355 -- # echo 1 00:06:02.559 16:58:26 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.559 16:58:26 thread -- scripts/common.sh@366 -- # decimal 2 00:06:02.559 16:58:26 thread -- scripts/common.sh@353 -- # local d=2 00:06:02.559 16:58:26 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.559 16:58:26 thread -- scripts/common.sh@355 -- # echo 2 00:06:02.559 16:58:26 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.559 16:58:26 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.559 16:58:26 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.559 16:58:26 thread -- scripts/common.sh@368 -- # return 0 00:06:02.559 16:58:26 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.560 16:58:26 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.560 --rc genhtml_branch_coverage=1 00:06:02.560 --rc genhtml_function_coverage=1 00:06:02.560 --rc genhtml_legend=1 00:06:02.560 --rc geninfo_all_blocks=1 00:06:02.560 --rc geninfo_unexecuted_blocks=1 00:06:02.560 00:06:02.560 ' 00:06:02.560 16:58:26 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.560 --rc genhtml_branch_coverage=1 00:06:02.560 --rc genhtml_function_coverage=1 00:06:02.560 --rc genhtml_legend=1 00:06:02.560 --rc geninfo_all_blocks=1 00:06:02.560 --rc geninfo_unexecuted_blocks=1 00:06:02.560 00:06:02.560 ' 00:06:02.560 16:58:26 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.560 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.560 --rc genhtml_branch_coverage=1 00:06:02.560 --rc genhtml_function_coverage=1 00:06:02.560 --rc genhtml_legend=1 00:06:02.560 --rc geninfo_all_blocks=1 00:06:02.560 --rc geninfo_unexecuted_blocks=1 00:06:02.560 00:06:02.560 ' 00:06:02.560 16:58:26 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.560 --rc genhtml_branch_coverage=1 00:06:02.560 --rc genhtml_function_coverage=1 00:06:02.560 --rc genhtml_legend=1 00:06:02.560 --rc geninfo_all_blocks=1 00:06:02.560 --rc geninfo_unexecuted_blocks=1 00:06:02.560 00:06:02.560 ' 00:06:02.560 16:58:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.560 16:58:26 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:02.560 16:58:26 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.560 16:58:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.560 ************************************ 00:06:02.560 START TEST thread_poller_perf 00:06:02.560 ************************************ 00:06:02.560 16:58:26 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.560 [2024-11-20 16:58:26.360302] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:06:02.560 [2024-11-20 16:58:26.360631] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59506 ] 00:06:02.819 [2024-11-20 16:58:26.550562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.078 [2024-11-20 16:58:26.707219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.078 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:04.457 [2024-11-20T16:58:28.326Z] ====================================== 00:06:04.457 [2024-11-20T16:58:28.326Z] busy:2212422604 (cyc) 00:06:04.457 [2024-11-20T16:58:28.326Z] total_run_count: 322000 00:06:04.457 [2024-11-20T16:58:28.326Z] tsc_hz: 2200000000 (cyc) 00:06:04.457 [2024-11-20T16:58:28.326Z] ====================================== 00:06:04.457 [2024-11-20T16:58:28.326Z] poller_cost: 6870 (cyc), 3122 (nsec) 00:06:04.457 00:06:04.457 real 0m1.614s 00:06:04.457 user 0m1.398s 00:06:04.457 sys 0m0.106s 00:06:04.457 16:58:27 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.457 ************************************ 00:06:04.457 16:58:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.457 END TEST thread_poller_perf 00:06:04.457 ************************************ 00:06:04.457 16:58:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.457 16:58:27 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:04.457 16:58:27 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.457 16:58:27 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.457 ************************************ 00:06:04.457 START TEST thread_poller_perf 00:06:04.457 
************************************ 00:06:04.457 16:58:27 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.457 [2024-11-20 16:58:28.023559] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:06:04.457 [2024-11-20 16:58:28.023748] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59543 ] 00:06:04.457 [2024-11-20 16:58:28.210105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.716 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:04.716 [2024-11-20 16:58:28.336029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.093 [2024-11-20T16:58:29.962Z] ====================================== 00:06:06.093 [2024-11-20T16:58:29.962Z] busy:2203673656 (cyc) 00:06:06.093 [2024-11-20T16:58:29.962Z] total_run_count: 4194000 00:06:06.093 [2024-11-20T16:58:29.962Z] tsc_hz: 2200000000 (cyc) 00:06:06.093 [2024-11-20T16:58:29.962Z] ====================================== 00:06:06.093 [2024-11-20T16:58:29.962Z] poller_cost: 525 (cyc), 238 (nsec) 00:06:06.093 00:06:06.093 real 0m1.577s 00:06:06.093 user 0m1.360s 00:06:06.093 sys 0m0.108s 00:06:06.093 16:58:29 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.093 ************************************ 00:06:06.093 END TEST thread_poller_perf 00:06:06.093 ************************************ 00:06:06.093 16:58:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.093 16:58:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:06.093 ************************************ 00:06:06.093 END TEST thread 00:06:06.093 ************************************ 00:06:06.093 
00:06:06.093 real 0m3.497s 00:06:06.093 user 0m2.928s 00:06:06.093 sys 0m0.344s 00:06:06.093 16:58:29 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.093 16:58:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.093 16:58:29 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:06.094 16:58:29 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:06.094 16:58:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.094 16:58:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.094 16:58:29 -- common/autotest_common.sh@10 -- # set +x 00:06:06.094 ************************************ 00:06:06.094 START TEST app_cmdline 00:06:06.094 ************************************ 00:06:06.094 16:58:29 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:06.094 * Looking for test storage... 00:06:06.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:06.094 16:58:29 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.094 16:58:29 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.094 16:58:29 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.094 16:58:29 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.094 16:58:29 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:06.094 16:58:29 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.094 16:58:29 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.094 --rc genhtml_branch_coverage=1 00:06:06.094 --rc genhtml_function_coverage=1 00:06:06.094 --rc 
genhtml_legend=1 00:06:06.094 --rc geninfo_all_blocks=1 00:06:06.094 --rc geninfo_unexecuted_blocks=1 00:06:06.094 00:06:06.094 ' 00:06:06.094 16:58:29 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.094 --rc genhtml_branch_coverage=1 00:06:06.094 --rc genhtml_function_coverage=1 00:06:06.094 --rc genhtml_legend=1 00:06:06.094 --rc geninfo_all_blocks=1 00:06:06.094 --rc geninfo_unexecuted_blocks=1 00:06:06.094 00:06:06.094 ' 00:06:06.094 16:58:29 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.094 --rc genhtml_branch_coverage=1 00:06:06.094 --rc genhtml_function_coverage=1 00:06:06.094 --rc genhtml_legend=1 00:06:06.094 --rc geninfo_all_blocks=1 00:06:06.094 --rc geninfo_unexecuted_blocks=1 00:06:06.094 00:06:06.094 ' 00:06:06.094 16:58:29 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.094 --rc genhtml_branch_coverage=1 00:06:06.094 --rc genhtml_function_coverage=1 00:06:06.094 --rc genhtml_legend=1 00:06:06.094 --rc geninfo_all_blocks=1 00:06:06.094 --rc geninfo_unexecuted_blocks=1 00:06:06.094 00:06:06.094 ' 00:06:06.094 16:58:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:06.094 16:58:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59632 00:06:06.094 16:58:29 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:06.094 16:58:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59632 00:06:06.094 16:58:29 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59632 ']' 00:06:06.094 16:58:29 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.094 16:58:29 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:06.094 16:58:29 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.094 16:58:29 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.094 16:58:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.094 [2024-11-20 16:58:29.956650] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:06:06.094 [2024-11-20 16:58:29.957390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59632 ] 00:06:06.353 [2024-11-20 16:58:30.142212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.611 [2024-11-20 16:58:30.264897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.544 16:58:31 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.544 16:58:31 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:07.544 16:58:31 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:07.544 { 00:06:07.544 "version": "SPDK v25.01-pre git sha1 25916e30c", 00:06:07.544 "fields": { 00:06:07.544 "major": 25, 00:06:07.544 "minor": 1, 00:06:07.544 "patch": 0, 00:06:07.544 "suffix": "-pre", 00:06:07.544 "commit": "25916e30c" 00:06:07.544 } 00:06:07.544 } 00:06:07.544 16:58:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:07.544 16:58:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:07.544 16:58:31 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:07.544 16:58:31 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:07.544 16:58:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:07.544 16:58:31 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.544 16:58:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.544 16:58:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:07.544 16:58:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:07.544 16:58:31 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.802 16:58:31 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:07.802 16:58:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:07.802 16:58:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.802 16:58:31 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:07.802 16:58:31 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.802 16:58:31 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.802 16:58:31 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.802 16:58:31 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.802 16:58:31 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.802 16:58:31 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.802 16:58:31 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.802 16:58:31 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.802 16:58:31 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:07.802 16:58:31 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.062 request: 00:06:08.062 { 00:06:08.062 "method": "env_dpdk_get_mem_stats", 00:06:08.062 "req_id": 1 00:06:08.062 } 00:06:08.062 Got JSON-RPC error response 00:06:08.062 response: 00:06:08.062 { 00:06:08.062 "code": -32601, 00:06:08.062 "message": "Method not found" 00:06:08.062 } 00:06:08.062 16:58:31 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:08.062 16:58:31 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.062 16:58:31 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:08.062 16:58:31 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.062 16:58:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59632 00:06:08.062 16:58:31 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59632 ']' 00:06:08.062 16:58:31 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59632 00:06:08.062 16:58:31 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:08.062 16:58:31 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.062 16:58:31 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59632 00:06:08.062 killing process with pid 59632 00:06:08.062 16:58:31 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.062 16:58:31 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.062 16:58:31 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59632' 00:06:08.062 16:58:31 app_cmdline -- common/autotest_common.sh@973 -- # kill 59632 00:06:08.062 16:58:31 app_cmdline -- common/autotest_common.sh@978 -- # wait 59632 00:06:09.968 00:06:09.968 real 0m4.144s 00:06:09.968 user 0m4.646s 00:06:09.968 sys 0m0.639s 00:06:09.968 16:58:33 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.968 16:58:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.968 ************************************ 00:06:09.968 END TEST app_cmdline 00:06:09.968 ************************************ 00:06:09.968 16:58:33 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:09.968 16:58:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.968 16:58:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.968 16:58:33 -- common/autotest_common.sh@10 -- # set +x 00:06:10.228 ************************************ 00:06:10.228 START TEST version 00:06:10.228 ************************************ 00:06:10.228 16:58:33 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:10.228 * Looking for test storage... 00:06:10.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:10.228 16:58:33 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.228 16:58:33 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.228 16:58:33 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.228 16:58:34 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.228 16:58:34 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.228 16:58:34 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.228 16:58:34 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.228 16:58:34 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.228 16:58:34 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.228 16:58:34 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.228 16:58:34 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.228 16:58:34 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.228 16:58:34 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.228 16:58:34 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:10.228 16:58:34 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.228 16:58:34 version -- scripts/common.sh@344 -- # case "$op" in 00:06:10.228 16:58:34 version -- scripts/common.sh@345 -- # : 1 00:06:10.228 16:58:34 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.228 16:58:34 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.228 16:58:34 version -- scripts/common.sh@365 -- # decimal 1 00:06:10.228 16:58:34 version -- scripts/common.sh@353 -- # local d=1 00:06:10.228 16:58:34 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.228 16:58:34 version -- scripts/common.sh@355 -- # echo 1 00:06:10.228 16:58:34 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.228 16:58:34 version -- scripts/common.sh@366 -- # decimal 2 00:06:10.228 16:58:34 version -- scripts/common.sh@353 -- # local d=2 00:06:10.228 16:58:34 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.228 16:58:34 version -- scripts/common.sh@355 -- # echo 2 00:06:10.228 16:58:34 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.228 16:58:34 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.228 16:58:34 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.228 16:58:34 version -- scripts/common.sh@368 -- # return 0 00:06:10.228 16:58:34 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.228 16:58:34 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.228 --rc genhtml_branch_coverage=1 00:06:10.228 --rc genhtml_function_coverage=1 00:06:10.228 --rc genhtml_legend=1 00:06:10.228 --rc geninfo_all_blocks=1 00:06:10.228 --rc geninfo_unexecuted_blocks=1 00:06:10.228 00:06:10.228 ' 00:06:10.228 16:58:34 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:06:10.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.228 --rc genhtml_branch_coverage=1 00:06:10.228 --rc genhtml_function_coverage=1 00:06:10.228 --rc genhtml_legend=1 00:06:10.228 --rc geninfo_all_blocks=1 00:06:10.228 --rc geninfo_unexecuted_blocks=1 00:06:10.228 00:06:10.228 ' 00:06:10.228 16:58:34 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.228 --rc genhtml_branch_coverage=1 00:06:10.228 --rc genhtml_function_coverage=1 00:06:10.228 --rc genhtml_legend=1 00:06:10.228 --rc geninfo_all_blocks=1 00:06:10.228 --rc geninfo_unexecuted_blocks=1 00:06:10.228 00:06:10.228 ' 00:06:10.228 16:58:34 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.228 --rc genhtml_branch_coverage=1 00:06:10.228 --rc genhtml_function_coverage=1 00:06:10.228 --rc genhtml_legend=1 00:06:10.228 --rc geninfo_all_blocks=1 00:06:10.228 --rc geninfo_unexecuted_blocks=1 00:06:10.228 00:06:10.228 ' 00:06:10.228 16:58:34 version -- app/version.sh@17 -- # get_header_version major 00:06:10.228 16:58:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:10.228 16:58:34 version -- app/version.sh@14 -- # cut -f2 00:06:10.228 16:58:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.228 16:58:34 version -- app/version.sh@17 -- # major=25 00:06:10.228 16:58:34 version -- app/version.sh@18 -- # get_header_version minor 00:06:10.228 16:58:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:10.228 16:58:34 version -- app/version.sh@14 -- # cut -f2 00:06:10.228 16:58:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.228 16:58:34 version -- app/version.sh@18 -- # minor=1 00:06:10.228 16:58:34 
version -- app/version.sh@19 -- # get_header_version patch 00:06:10.228 16:58:34 version -- app/version.sh@14 -- # cut -f2 00:06:10.228 16:58:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:10.228 16:58:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.228 16:58:34 version -- app/version.sh@19 -- # patch=0 00:06:10.228 16:58:34 version -- app/version.sh@20 -- # get_header_version suffix 00:06:10.228 16:58:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:10.228 16:58:34 version -- app/version.sh@14 -- # cut -f2 00:06:10.228 16:58:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.228 16:58:34 version -- app/version.sh@20 -- # suffix=-pre 00:06:10.228 16:58:34 version -- app/version.sh@22 -- # version=25.1 00:06:10.228 16:58:34 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:10.228 16:58:34 version -- app/version.sh@28 -- # version=25.1rc0 00:06:10.228 16:58:34 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:10.228 16:58:34 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:10.488 16:58:34 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:10.488 16:58:34 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:10.488 00:06:10.488 real 0m0.261s 00:06:10.488 user 0m0.170s 00:06:10.488 sys 0m0.124s 00:06:10.488 16:58:34 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.488 16:58:34 version -- common/autotest_common.sh@10 -- # set +x 00:06:10.488 ************************************ 00:06:10.488 END TEST version 00:06:10.488 ************************************ 00:06:10.488 
16:58:34 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:10.488 16:58:34 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:10.488 16:58:34 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:10.488 16:58:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.488 16:58:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.488 16:58:34 -- common/autotest_common.sh@10 -- # set +x 00:06:10.488 ************************************ 00:06:10.488 START TEST bdev_raid 00:06:10.488 ************************************ 00:06:10.488 16:58:34 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:10.488 * Looking for test storage... 00:06:10.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:10.488 16:58:34 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.488 16:58:34 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.488 16:58:34 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.488 16:58:34 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.488 16:58:34 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:10.488 16:58:34 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.488 16:58:34 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.488 --rc genhtml_branch_coverage=1 00:06:10.488 --rc genhtml_function_coverage=1 00:06:10.488 --rc genhtml_legend=1 00:06:10.488 --rc geninfo_all_blocks=1 00:06:10.488 --rc geninfo_unexecuted_blocks=1 00:06:10.488 00:06:10.488 ' 00:06:10.488 16:58:34 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.488 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:10.488 --rc genhtml_branch_coverage=1 00:06:10.488 --rc genhtml_function_coverage=1 00:06:10.488 --rc genhtml_legend=1 00:06:10.488 --rc geninfo_all_blocks=1 00:06:10.488 --rc geninfo_unexecuted_blocks=1 00:06:10.488 00:06:10.488 ' 00:06:10.488 16:58:34 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.488 --rc genhtml_branch_coverage=1 00:06:10.488 --rc genhtml_function_coverage=1 00:06:10.488 --rc genhtml_legend=1 00:06:10.488 --rc geninfo_all_blocks=1 00:06:10.488 --rc geninfo_unexecuted_blocks=1 00:06:10.488 00:06:10.488 ' 00:06:10.488 16:58:34 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.488 --rc genhtml_branch_coverage=1 00:06:10.488 --rc genhtml_function_coverage=1 00:06:10.488 --rc genhtml_legend=1 00:06:10.488 --rc geninfo_all_blocks=1 00:06:10.488 --rc geninfo_unexecuted_blocks=1 00:06:10.488 00:06:10.488 ' 00:06:10.488 16:58:34 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:10.488 16:58:34 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:10.488 16:58:34 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:10.488 16:58:34 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:10.488 16:58:34 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:10.488 16:58:34 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:10.488 16:58:34 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:10.488 16:58:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.488 16:58:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.489 16:58:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:10.489 ************************************ 
00:06:10.489 START TEST raid1_resize_data_offset_test 00:06:10.489 ************************************ 00:06:10.489 16:58:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:10.489 16:58:34 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59816 00:06:10.489 16:58:34 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59816' 00:06:10.489 Process raid pid: 59816 00:06:10.489 16:58:34 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:10.489 16:58:34 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59816 00:06:10.489 16:58:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59816 ']' 00:06:10.489 16:58:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.489 16:58:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.489 16:58:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.489 16:58:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.489 16:58:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:10.748 [2024-11-20 16:58:34.456292] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:06:10.748 [2024-11-20 16:58:34.456745] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:11.008 [2024-11-20 16:58:34.640107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.008 [2024-11-20 16:58:34.746412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.267 [2024-11-20 16:58:34.935323] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:11.267 [2024-11-20 16:58:34.935393] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:11.526 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.526 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:11.526 16:58:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:11.526 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.526 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.786 malloc0 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.786 malloc1 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.786 16:58:35 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.786 null0 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.786 [2024-11-20 16:58:35.496356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:11.786 [2024-11-20 16:58:35.498807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:11.786 [2024-11-20 16:58:35.498903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:11.786 [2024-11-20 16:58:35.499085] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:11.786 [2024-11-20 16:58:35.499105] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:11.786 [2024-11-20 16:58:35.499489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:11.786 [2024-11-20 16:58:35.499722] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:11.786 [2024-11-20 16:58:35.499755] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:11.786 [2024-11-20 16:58:35.500035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.786 [2024-11-20 16:58:35.560431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.786 16:58:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.355 malloc2 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.355 [2024-11-20 16:58:36.064924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:12.355 [2024-11-20 16:58:36.080258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.355 [2024-11-20 16:58:36.082879] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59816 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59816 ']' 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59816 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59816 00:06:12.355 killing process with pid 59816 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.355 16:58:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.356 16:58:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59816' 00:06:12.356 16:58:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59816 00:06:12.356 16:58:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59816 00:06:12.356 [2024-11-20 16:58:36.174297] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:12.356 [2024-11-20 16:58:36.174798] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:12.356 [2024-11-20 16:58:36.174893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:12.356 [2024-11-20 16:58:36.174920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:12.356 [2024-11-20 16:58:36.203450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:12.356 [2024-11-20 16:58:36.204036] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:12.356 [2024-11-20 16:58:36.204068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:14.262 [2024-11-20 16:58:37.716415] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:15.199 16:58:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:15.199 00:06:15.199 real 0m4.396s 00:06:15.199 user 0m4.283s 00:06:15.199 sys 0m0.605s 00:06:15.199 16:58:38 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.199 ************************************ 00:06:15.199 END TEST raid1_resize_data_offset_test 00:06:15.199 ************************************ 00:06:15.199 16:58:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.199 16:58:38 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:15.199 16:58:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:15.199 16:58:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.199 16:58:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:15.199 ************************************ 00:06:15.199 START TEST raid0_resize_superblock_test 00:06:15.199 ************************************ 00:06:15.199 16:58:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:15.199 16:58:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:15.199 Process raid pid: 59899 00:06:15.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:15.199 16:58:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59899 00:06:15.199 16:58:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59899' 00:06:15.199 16:58:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59899 00:06:15.199 16:58:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:15.199 16:58:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59899 ']' 00:06:15.199 16:58:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.200 16:58:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.200 16:58:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.200 16:58:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.200 16:58:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.200 [2024-11-20 16:58:38.893273] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:06:15.200 [2024-11-20 16:58:38.893415] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.200 [2024-11-20 16:58:39.054775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.459 [2024-11-20 16:58:39.172990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.717 [2024-11-20 16:58:39.357181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:15.717 [2024-11-20 16:58:39.357232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:16.285 16:58:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.285 16:58:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:16.285 16:58:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:16.285 16:58:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.285 16:58:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.544 malloc0 00:06:16.544 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.544 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:16.544 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.544 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.544 [2024-11-20 16:58:40.344916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:16.544 [2024-11-20 16:58:40.345002] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:16.544 [2024-11-20 16:58:40.345030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:16.544 [2024-11-20 16:58:40.345047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:16.544 [2024-11-20 16:58:40.347996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:16.544 [2024-11-20 16:58:40.348045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:16.544 pt0 00:06:16.544 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.544 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:16.544 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.544 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.803 9c5241fb-aef5-4186-b79a-e1af138db4bc 00:06:16.803 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.803 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:16.803 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.804 c0883b8b-e10f-406a-b474-d8c02a8ee677 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.804 16:58:40 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.804 c2365133-aa98-4c04-bc3f-4eb2cd74b04f 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.804 [2024-11-20 16:58:40.488468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c0883b8b-e10f-406a-b474-d8c02a8ee677 is claimed 00:06:16.804 [2024-11-20 16:58:40.488566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c2365133-aa98-4c04-bc3f-4eb2cd74b04f is claimed 00:06:16.804 [2024-11-20 16:58:40.488733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:16.804 [2024-11-20 16:58:40.488800] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:16.804 [2024-11-20 16:58:40.489142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:16.804 [2024-11-20 16:58:40.489372] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:16.804 [2024-11-20 16:58:40.489387] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:16.804 [2024-11-20 16:58:40.489561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:16.804 [2024-11-20 
16:58:40.608744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.804 [2024-11-20 16:58:40.660713] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:16.804 [2024-11-20 16:58:40.660747] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c0883b8b-e10f-406a-b474-d8c02a8ee677' was resized: old size 131072, new size 204800 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.804 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.804 [2024-11-20 16:58:40.668705] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:16.804 [2024-11-20 16:58:40.668736] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c2365133-aa98-4c04-bc3f-4eb2cd74b04f' was resized: old size 131072, new size 204800 00:06:16.804 
[2024-11-20 16:58:40.668850] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.063 16:58:40 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.063 [2024-11-20 16:58:40.784672] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.063 [2024-11-20 16:58:40.824483] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:17.063 [2024-11-20 16:58:40.824558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:17.063 [2024-11-20 16:58:40.824578] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:17.063 [2024-11-20 16:58:40.824594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:17.063 [2024-11-20 16:58:40.824707] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:17.063 [2024-11-20 16:58:40.824748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:17.063 
[2024-11-20 16:58:40.824812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.063 [2024-11-20 16:58:40.832420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:17.063 [2024-11-20 16:58:40.832478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:17.063 [2024-11-20 16:58:40.832501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:17.063 [2024-11-20 16:58:40.832515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:17.063 [2024-11-20 16:58:40.835092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:17.063 [2024-11-20 16:58:40.835159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:17.063 pt0 00:06:17.063 [2024-11-20 16:58:40.837413] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c0883b8b-e10f-406a-b474-d8c02a8ee677 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.063 [2024-11-20 16:58:40.837477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c0883b8b-e10f-406a-b474-d8c02a8ee677 is claimed 00:06:17.063 [2024-11-20 16:58:40.837587] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c2365133-aa98-4c04-bc3f-4eb2cd74b04f 00:06:17.063 [2024-11-20 16:58:40.837615] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c2365133-aa98-4c04-bc3f-4eb2cd74b04f is claimed 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:17.063 [2024-11-20 16:58:40.837782] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev c2365133-aa98-4c04-bc3f-4eb2cd74b04f (2) smaller than existing raid bdev Raid (3) 00:06:17.063 [2024-11-20 16:58:40.837832] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev c0883b8b-e10f-406a-b474-d8c02a8ee677: File exists 00:06:17.063 [2024-11-20 16:58:40.837881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:17.063 [2024-11-20 16:58:40.837898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:17.063 [2024-11-20 16:58:40.838323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.063 [2024-11-20 16:58:40.838674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:17.063 [2024-11-20 16:58:40.838691] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:17.063 [2024-11-20 16:58:40.838921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:17.063 [2024-11-20 16:58:40.856680] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59899 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59899 ']' 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59899 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.063 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59899 00:06:17.340 killing process with pid 59899 00:06:17.340 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.340 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.340 16:58:40 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59899' 00:06:17.340 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59899 00:06:17.340 [2024-11-20 16:58:40.938109] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:17.340 16:58:40 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59899 00:06:17.340 [2024-11-20 16:58:40.938212] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:17.340 [2024-11-20 16:58:40.938276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:17.340 [2024-11-20 16:58:40.938290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:18.290 [2024-11-20 16:58:42.100543] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:19.225 16:58:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:19.225 00:06:19.225 real 0m4.234s 00:06:19.225 user 0m4.561s 00:06:19.225 sys 0m0.606s 00:06:19.225 16:58:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.225 16:58:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.225 ************************************ 00:06:19.225 END TEST raid0_resize_superblock_test 00:06:19.225 ************************************ 00:06:19.225 16:58:43 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:19.225 16:58:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.225 16:58:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.225 16:58:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:19.225 ************************************ 00:06:19.225 START TEST raid1_resize_superblock_test 00:06:19.225 
************************************ 00:06:19.225 16:58:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:19.225 16:58:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:19.225 Process raid pid: 59998 00:06:19.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.225 16:58:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59998 00:06:19.225 16:58:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59998' 00:06:19.225 16:58:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:19.225 16:58:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59998 00:06:19.225 16:58:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59998 ']' 00:06:19.225 16:58:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.225 16:58:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.225 16:58:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.225 16:58:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.225 16:58:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.484 [2024-11-20 16:58:43.193107] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:06:19.484 [2024-11-20 16:58:43.193515] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.743 [2024-11-20 16:58:43.375591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.743 [2024-11-20 16:58:43.494391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.002 [2024-11-20 16:58:43.689835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:20.002 [2024-11-20 16:58:43.689955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:20.569 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.569 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:20.569 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:20.569 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.569 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.828 malloc0 00:06:20.828 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.828 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:20.828 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.828 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.828 [2024-11-20 16:58:44.617251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:20.828 [2024-11-20 16:58:44.617335] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:20.828 [2024-11-20 16:58:44.617364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:20.828 [2024-11-20 16:58:44.617380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:20.828 [2024-11-20 16:58:44.620048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:20.828 [2024-11-20 16:58:44.620092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:20.828 pt0 00:06:20.828 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.828 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:20.828 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.828 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.087 a9124db3-1574-48a0-a0f2-ff97dee14c8e 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.087 5c08006e-1eac-4cbd-bc29-622b92d437d2 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.087 16:58:44 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.087 2db8d442-103d-49be-b406-e70513639cbf 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.087 [2024-11-20 16:58:44.761591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5c08006e-1eac-4cbd-bc29-622b92d437d2 is claimed 00:06:21.087 [2024-11-20 16:58:44.761685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2db8d442-103d-49be-b406-e70513639cbf is claimed 00:06:21.087 [2024-11-20 16:58:44.761902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:21.087 [2024-11-20 16:58:44.761927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:21.087 [2024-11-20 16:58:44.762268] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:21.087 [2024-11-20 16:58:44.762524] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:21.087 [2024-11-20 16:58:44.762546] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:21.087 [2024-11-20 16:58:44.762727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.087 [2024-11-20 
16:58:44.881865] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.087 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.087 [2024-11-20 16:58:44.937809] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:21.088 [2024-11-20 16:58:44.937982] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '5c08006e-1eac-4cbd-bc29-622b92d437d2' was resized: old size 131072, new size 204800 00:06:21.088 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.088 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:21.088 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.088 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.088 [2024-11-20 16:58:44.949796] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:21.088 [2024-11-20 16:58:44.950005] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2db8d442-103d-49be-b406-e70513639cbf' was resized: old size 131072, new size 204800 00:06:21.088 
[2024-11-20 16:58:44.950173] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:21.347 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.347 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:21.347 16:58:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:21.347 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.347 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.347 16:58:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:21.347 16:58:45 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.347 [2024-11-20 16:58:45.070007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.347 [2024-11-20 16:58:45.117698] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:21.347 [2024-11-20 16:58:45.117820] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:21.347 [2024-11-20 16:58:45.117855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:21.347 [2024-11-20 16:58:45.118018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:21.347 [2024-11-20 16:58:45.118321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:21.347 [2024-11-20 16:58:45.118416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:06:21.347 [2024-11-20 16:58:45.118437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.347 [2024-11-20 16:58:45.125640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:21.347 [2024-11-20 16:58:45.125725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:21.347 [2024-11-20 16:58:45.125765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:21.347 [2024-11-20 16:58:45.125815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:21.347 [2024-11-20 16:58:45.129021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:21.347 [2024-11-20 16:58:45.129069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:21.347 pt0 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.347 [2024-11-20 16:58:45.131447] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 5c08006e-1eac-4cbd-bc29-622b92d437d2 00:06:21.347 [2024-11-20 
16:58:45.131527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5c08006e-1eac-4cbd-bc29-622b92d437d2 is claimed 00:06:21.347 [2024-11-20 16:58:45.131684] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2db8d442-103d-49be-b406-e70513639cbf 00:06:21.347 [2024-11-20 16:58:45.131728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2db8d442-103d-49be-b406-e70513639cbf is claimed 00:06:21.347 [2024-11-20 16:58:45.131935] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 2db8d442-103d-49be-b406-e70513639cbf (2) smaller than existing raid bdev Raid (3) 00:06:21.347 [2024-11-20 16:58:45.131977] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 5c08006e-1eac-4cbd-bc29-622b92d437d2: File exists 00:06:21.347 [2024-11-20 16:58:45.132032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:21.347 [2024-11-20 16:58:45.132051] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:21.347 [2024-11-20 16:58:45.132372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:21.347 [2024-11-20 16:58:45.132615] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:21.347 [2024-11-20 16:58:45.132644] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:21.347 [2024-11-20 16:58:45.132844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.347 [2024-11-20 16:58:45.146003] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.347 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:21.348 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:21.348 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:21.348 16:58:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59998 00:06:21.348 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59998 ']' 00:06:21.348 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59998 00:06:21.348 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:21.348 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.348 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59998 00:06:21.608 killing process with pid 59998 00:06:21.608 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.608 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.608 16:58:45 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59998' 00:06:21.608 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59998 00:06:21.608 [2024-11-20 16:58:45.224070] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:21.608 16:58:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59998 00:06:21.608 [2024-11-20 16:58:45.224210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:21.608 [2024-11-20 16:58:45.224278] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:21.608 [2024-11-20 16:58:45.224290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:22.546 [2024-11-20 16:58:46.397239] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:23.482 16:58:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:23.482 00:06:23.482 real 0m4.255s 00:06:23.482 user 0m4.574s 00:06:23.482 sys 0m0.612s 00:06:23.482 16:58:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.482 16:58:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.482 ************************************ 00:06:23.482 END TEST raid1_resize_superblock_test 00:06:23.482 ************************************ 00:06:23.741 16:58:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:23.741 16:58:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:23.741 16:58:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:23.741 16:58:47 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:23.741 16:58:47 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:23.741 16:58:47 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:23.741 
16:58:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:23.741 16:58:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.741 16:58:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:23.741 ************************************ 00:06:23.741 START TEST raid_function_test_raid0 00:06:23.741 ************************************ 00:06:23.741 16:58:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:23.741 16:58:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:23.741 16:58:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:23.741 16:58:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:23.741 Process raid pid: 60095 00:06:23.741 16:58:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60095 00:06:23.741 16:58:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:23.741 16:58:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60095' 00:06:23.741 16:58:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60095 00:06:23.741 16:58:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60095 ']' 00:06:23.741 16:58:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.741 16:58:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.741 16:58:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:23.741 16:58:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.741 16:58:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:23.741 [2024-11-20 16:58:47.512665] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:06:23.741 [2024-11-20 16:58:47.513457] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.000 [2024-11-20 16:58:47.694644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.000 [2024-11-20 16:58:47.814135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.258 [2024-11-20 16:58:48.006564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:24.258 [2024-11-20 16:58:48.006605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:24.828 Base_1 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.828 
16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:24.828 Base_2 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:24.828 [2024-11-20 16:58:48.589638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:24.828 [2024-11-20 16:58:48.592259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:24.828 [2024-11-20 16:58:48.592337] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:24.828 [2024-11-20 16:58:48.592359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:24.828 [2024-11-20 16:58:48.592608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:24.828 [2024-11-20 16:58:48.592786] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:24.828 [2024-11-20 16:58:48.592800] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:24.828 [2024-11-20 16:58:48.592996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:24.828 16:58:48 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:24.828 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:25.087 [2024-11-20 16:58:48.865759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:25.087 /dev/nbd0 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:25.087 1+0 records in 00:06:25.087 1+0 records out 00:06:25.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335197 s, 12.2 MB/s 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:25.087 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:25.088 16:58:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.720 { 00:06:25.720 "nbd_device": "/dev/nbd0", 00:06:25.720 "bdev_name": "raid" 00:06:25.720 } 00:06:25.720 ]' 00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.720 { 00:06:25.720 "nbd_device": "/dev/nbd0", 00:06:25.720 "bdev_name": "raid" 00:06:25.720 } 00:06:25.720 ]' 00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize
00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512
00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:06:25.720 4096+0 records in
00:06:25.720 4096+0 records out
00:06:25.720 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0299093 s, 70.1 MB/s
00:06:25.720 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:06:25.992 4096+0 records in
00:06:25.992 4096+0 records out
00:06:25.992 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.298402 s, 7.0 MB/s
00:06:25.992 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:06:25.992 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:25.992 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:06:25.993 128+0 records in
00:06:25.993 128+0 records out
00:06:25.993 65536 bytes (66 kB, 64 KiB) copied, 0.000831605 s, 78.8 MB/s
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:06:25.993 2035+0 records in
00:06:25.993 2035+0 records out
00:06:25.993 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0125948 s, 82.7 MB/s
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:06:25.993 456+0 records in
00:06:25.993 456+0 records out
00:06:25.993 233472 bytes (233 kB, 228 KiB) copied, 0.00410785 s, 56.8 MB/s
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:25.993 16:58:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:06:26.252 [2024-11-20 16:58:50.060381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:26.252 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:26.252 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:26.252 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:26.252 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:26.252 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:26.252 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:26.252 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break
00:06:26.252 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0
00:06:26.252 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:06:26.252 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:06:26.252 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:06:26.510 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:26.510 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:26.510 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:26.510 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:26.510 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo ''
00:06:26.510 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:26.510 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true
00:06:26.510 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0
00:06:26.510 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0
00:06:26.510 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0
00:06:26.510 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:06:26.510 16:58:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60095
00:06:26.510 16:58:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60095 ']'
00:06:26.510 16:58:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60095
00:06:26.510 16:58:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname
00:06:26.769 16:58:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:26.769 16:58:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60095
00:06:26.769 16:58:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:26.769 16:58:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:26.769 16:58:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60095'
killing process with pid 60095
00:06:26.769 16:58:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60095
00:06:26.769 [2024-11-20 16:58:50.408022] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:26.769 16:58:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60095
00:06:26.769 [2024-11-20 16:58:50.408138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:26.769 [2024-11-20 16:58:50.408207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:26.769 [2024-11-20 16:58:50.408254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:06:26.769 [2024-11-20 16:58:50.569114] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:27.704 16:58:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0
00:06:27.704
00:06:27.704 real 0m4.095s
00:06:27.704 user 0m5.018s
00:06:27.704 sys 0m1.015s
00:06:27.704 16:58:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:27.704 16:58:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:06:27.704 ************************************
00:06:27.704 END TEST raid_function_test_raid0
00:06:27.704 ************************************
00:06:27.704 16:58:51 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat
00:06:27.704 16:58:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:27.705 16:58:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:27.705 16:58:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:27.705
************************************
00:06:27.705 START TEST raid_function_test_concat
************************************
00:06:27.705 16:58:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat
00:06:27.705 16:58:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat
00:06:27.705 16:58:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:06:27.705 16:58:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:06:27.705 16:58:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60224
Process raid pid: 60224
00:06:27.705 16:58:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60224'
00:06:27.705 16:58:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:27.705 16:58:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60224
00:06:27.705 16:58:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60224 ']'
00:06:27.705 16:58:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:27.705 16:58:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:27.705 16:58:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:27.705 16:58:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:27.705 16:58:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:06:27.963 [2024-11-20 16:58:51.656278] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization...
[2024-11-20 16:58:51.656468] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:28.222 [2024-11-20 16:58:51.838302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:28.222 [2024-11-20 16:58:51.956746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:28.480 [2024-11-20 16:58:52.142666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:28.480 [2024-11-20 16:58:52.142719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:06:29.048 Base_1
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:06:29.048 Base_2
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:06:29.048 [2024-11-20 16:58:52.743796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:06:29.048 [2024-11-20 16:58:52.746216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:06:29.048 [2024-11-20 16:58:52.746317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:29.048 [2024-11-20 16:58:52.746336] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:06:29.048 [2024-11-20 16:58:52.746664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:29.048 [2024-11-20 16:58:52.746875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:29.048 [2024-11-20 16:58:52.746895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780
00:06:29.048 [2024-11-20 16:58:52.747085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:06:29.048 16:58:52
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:06:29.048 16:58:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:06:29.307 [2024-11-20 16:58:53.087988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:06:29.307 /dev/nbd0
00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i
00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break
00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:06:29.307 1+0 records in
00:06:29.307 1+0 records out
00:06:29.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404177 s, 10.1 MB/s
00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096
00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0
00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ ))
16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:29.307 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:29.565 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.565 { 00:06:29.565 "nbd_device": "/dev/nbd0", 00:06:29.565 "bdev_name": "raid" 00:06:29.565 } 00:06:29.565 ]' 00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.824 { 00:06:29.824 "nbd_device": "/dev/nbd0", 00:06:29.824 "bdev_name": "raid" 00:06:29.824 } 00:06:29.824 ]' 00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:29.824 
16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize
00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512
00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:06:29.824 4096+0 records in
00:06:29.824 4096+0 records out
00:06:29.824 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0307668 s, 68.2 MB/s
00:06:29.824 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:06:30.084 4096+0 records in
00:06:30.084 4096+0 records out
00:06:30.084 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.300949 s, 7.0 MB/s
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:06:30.084 128+0 records in
00:06:30.084 128+0 records out
00:06:30.084 65536 bytes (66 kB, 64 KiB) copied, 0.000483008 s, 136 MB/s
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:06:30.084 2035+0 records in
00:06:30.084 2035+0 records out
00:06:30.084 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00639754 s, 163 MB/s
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:06:30.084 456+0 records in
00:06:30.084 456+0 records out
00:06:30.084 233472 bytes (233 kB, 228 KiB) copied, 0.0019265 s, 121 MB/s
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- #
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:30.084 16:58:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:06:30.343 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:30.343 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:30.343 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:30.343 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
[2024-11-20 16:58:54.178757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:30.343 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:30.343 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:30.343 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:06:30.343 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:06:30.343 16:58:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:06:30.343 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:06:30.343 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60224
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60224 ']'
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60224
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60224
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 60224
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60224'
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60224
00:06:30.911 [2024-11-20 16:58:54.562705] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:30.911 16:58:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60224
00:06:30.911 [2024-11-20 16:58:54.562845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:30.911 [2024-11-20 16:58:54.562916] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:30.911 [2024-11-20 16:58:54.562935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:06:30.911 [2024-11-20 16:58:54.721307] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:31.847 16:58:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:06:31.847
00:06:31.847 real 0m4.136s
00:06:31.847 user 0m5.076s
00:06:31.847 sys 0m0.994s
00:06:31.847 16:58:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:31.847 16:58:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:06:31.847 ************************************
00:06:31.847 END TEST raid_function_test_concat
00:06:31.847 ************************************
00:06:32.106 16:58:55 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:06:32.106 16:58:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:32.106 16:58:55 bdev_raid --
common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.106 16:58:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:32.106 ************************************ 00:06:32.106 START TEST raid0_resize_test 00:06:32.106 ************************************ 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60353 00:06:32.106 Process raid pid: 60353 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60353' 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60353 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60353 ']' 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:32.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.106 16:58:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.106 [2024-11-20 16:58:55.852950] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:06:32.106 [2024-11-20 16:58:55.853626] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.365 [2024-11-20 16:58:56.035214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.365 [2024-11-20 16:58:56.146160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.623 [2024-11-20 16:58:56.340612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.623 [2024-11-20 16:58:56.340685] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.189 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.189 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:33.189 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:33.189 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.189 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.189 Base_1 00:06:33.189 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.189 
16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:33.189 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.189 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.189 Base_2 00:06:33.189 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.189 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:33.189 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:33.189 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.189 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.189 [2024-11-20 16:58:56.839599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:33.189 [2024-11-20 16:58:56.841900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:33.190 [2024-11-20 16:58:56.841978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:33.190 [2024-11-20 16:58:56.841995] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:33.190 [2024-11-20 16:58:56.842315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:33.190 [2024-11-20 16:58:56.842468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:33.190 [2024-11-20 16:58:56.842489] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:33.190 [2024-11-20 16:58:56.842652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.190 
16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.190 [2024-11-20 16:58:56.847536] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:33.190 [2024-11-20 16:58:56.847586] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:33.190 true 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.190 [2024-11-20 16:58:56.859800] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.190 [2024-11-20 16:58:56.911658] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:33.190 [2024-11-20 16:58:56.911705] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:33.190 [2024-11-20 16:58:56.911754] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:33.190 true 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.190 [2024-11-20 16:58:56.923882] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60353 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@954 -- # '[' -z 60353 ']' 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60353 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.190 16:58:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60353 00:06:33.190 16:58:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.190 16:58:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.190 16:58:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60353' 00:06:33.190 killing process with pid 60353 00:06:33.190 16:58:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60353 00:06:33.190 [2024-11-20 16:58:57.005626] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:33.190 16:58:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60353 00:06:33.190 [2024-11-20 16:58:57.005730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:33.190 [2024-11-20 16:58:57.005811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:33.190 [2024-11-20 16:58:57.005825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:33.190 [2024-11-20 16:58:57.020812] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:34.124 16:58:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:34.124 00:06:34.124 real 0m2.219s 00:06:34.124 user 0m2.437s 00:06:34.124 sys 0m0.409s 00:06:34.124 16:58:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.124 
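The raid0_resize_test trace above reduces to a size check: after both 32 MiB base bdevs are resized to 64 MiB, the raid0 bdev's num_blocks (262144 at a 512-byte block size) must equal the 128 MiB sum of its bases. A minimal standalone sketch of that arithmetic, with the rpc.py/jq calls replaced by the literal values they returned in the trace:

```shell
#!/bin/sh
# Values as reported in the trace above:
blksize=512        # block size of the null base bdevs
blkcnt=262144      # from: rpc.py bdev_get_bdevs -b Raid | jq '.[].num_blocks'

# raid0 capacity is the sum of the base bdev sizes,
# so two 64 MiB bases give an expected 128 MiB raid.
raid_size_mb=$((blkcnt * blksize / 1048576))
expected_size=128

if [ "$raid_size_mb" -ne "$expected_size" ]; then
    echo "size mismatch: ${raid_size_mb} != ${expected_size}" >&2
    exit 1
fi
echo "raid size ${raid_size_mb} MiB as expected"
```

For the raid1_resize_test that follows, the same check runs with expected_size=64, since mirroring keeps the raid's capacity at a single base bdev's size (131072 blocks in the trace).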
16:58:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.124 ************************************ 00:06:34.124 END TEST raid0_resize_test 00:06:34.124 ************************************ 00:06:34.383 16:58:58 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:34.383 16:58:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:34.383 16:58:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.383 16:58:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:34.383 ************************************ 00:06:34.383 START TEST raid1_resize_test 00:06:34.383 ************************************ 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60409 00:06:34.383 Process raid pid: 60409 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60409' 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60409 00:06:34.383 16:58:58 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60409 ']' 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.383 16:58:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.383 [2024-11-20 16:58:58.129785] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:06:34.383 [2024-11-20 16:58:58.129978] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.642 [2024-11-20 16:58:58.315799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.642 [2024-11-20 16:58:58.436680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.901 [2024-11-20 16:58:58.616190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:34.901 [2024-11-20 16:58:58.616257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:35.469 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.469 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:35.469 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:35.469 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.469 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.469 Base_1 00:06:35.469 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.469 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.470 Base_2 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.470 [2024-11-20 16:58:59.114190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:35.470 [2024-11-20 16:58:59.116617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:35.470 [2024-11-20 16:58:59.116711] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:35.470 [2024-11-20 16:58:59.116730] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:35.470 [2024-11-20 16:58:59.117114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:35.470 [2024-11-20 16:58:59.117300] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:35.470 [2024-11-20 16:58:59.117323] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:35.470 [2024-11-20 16:58:59.117506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.470 [2024-11-20 16:58:59.122178] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:35.470 [2024-11-20 16:58:59.122230] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:35.470 true 00:06:35.470 
16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.470 [2024-11-20 16:58:59.134340] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.470 [2024-11-20 16:58:59.178216] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:35.470 [2024-11-20 16:58:59.178251] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:35.470 [2024-11-20 16:58:59.178299] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:35.470 true 00:06:35.470 16:58:59 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.470 [2024-11-20 16:58:59.190383] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60409 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60409 ']' 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60409 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60409 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.470 killing process with pid 60409 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60409' 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60409 00:06:35.470 [2024-11-20 16:58:59.268587] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:35.470 [2024-11-20 16:58:59.268668] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:35.470 16:58:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60409 00:06:35.470 [2024-11-20 16:58:59.269322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:35.470 [2024-11-20 16:58:59.269353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:35.470 [2024-11-20 16:58:59.283855] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:36.408 16:59:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:36.408 00:06:36.408 real 0m2.150s 00:06:36.408 user 0m2.401s 00:06:36.408 sys 0m0.364s 00:06:36.408 16:59:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.408 16:59:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.408 ************************************ 00:06:36.408 END TEST raid1_resize_test 00:06:36.408 ************************************ 00:06:36.408 16:59:00 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:36.408 16:59:00 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:36.408 16:59:00 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:36.408 16:59:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:36.408 16:59:00 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.408 16:59:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:36.408 ************************************ 00:06:36.408 START TEST raid_state_function_test 00:06:36.408 ************************************ 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60466 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:36.408 Process raid pid: 60466 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60466' 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60466 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60466 ']' 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.408 16:59:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.668 [2024-11-20 16:59:00.332986] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:06:36.668 [2024-11-20 16:59:00.333255] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.668 [2024-11-20 16:59:00.517130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.927 [2024-11-20 16:59:00.636666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.187 [2024-11-20 16:59:00.825498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:37.187 [2024-11-20 16:59:00.825582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.446 [2024-11-20 16:59:01.274783] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:37.446 
[2024-11-20 16:59:01.274871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:37.446 [2024-11-20 16:59:01.274890] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:37.446 [2024-11-20 16:59:01.274907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.446 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.705 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:37.705 "name": "Existed_Raid", 00:06:37.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.705 "strip_size_kb": 64, 00:06:37.705 "state": "configuring", 00:06:37.705 "raid_level": "raid0", 00:06:37.705 "superblock": false, 00:06:37.705 "num_base_bdevs": 2, 00:06:37.705 "num_base_bdevs_discovered": 0, 00:06:37.705 "num_base_bdevs_operational": 2, 00:06:37.705 "base_bdevs_list": [ 00:06:37.705 { 00:06:37.705 "name": "BaseBdev1", 00:06:37.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.705 "is_configured": false, 00:06:37.705 "data_offset": 0, 00:06:37.705 "data_size": 0 00:06:37.705 }, 00:06:37.705 { 00:06:37.705 "name": "BaseBdev2", 00:06:37.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.705 "is_configured": false, 00:06:37.705 "data_offset": 0, 00:06:37.706 "data_size": 0 00:06:37.706 } 00:06:37.706 ] 00:06:37.706 }' 00:06:37.706 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:37.706 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.965 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:37.965 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.965 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.965 [2024-11-20 16:59:01.774896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:37.965 [2024-11-20 16:59:01.774937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:06:37.965 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.965 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:37.965 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.965 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.965 [2024-11-20 16:59:01.786840] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:37.965 [2024-11-20 16:59:01.786901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:37.965 [2024-11-20 16:59:01.786933] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:37.965 [2024-11-20 16:59:01.786952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:37.965 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.965 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:37.965 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.965 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.965 [2024-11-20 16:59:01.829911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:37.965 BaseBdev1 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:38.225 16:59:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.225 [ 00:06:38.225 { 00:06:38.225 "name": "BaseBdev1", 00:06:38.225 "aliases": [ 00:06:38.225 "6d71277f-1902-4ae1-9d85-f53d33053016" 00:06:38.225 ], 00:06:38.225 "product_name": "Malloc disk", 00:06:38.225 "block_size": 512, 00:06:38.225 "num_blocks": 65536, 00:06:38.225 "uuid": "6d71277f-1902-4ae1-9d85-f53d33053016", 00:06:38.225 "assigned_rate_limits": { 00:06:38.225 "rw_ios_per_sec": 0, 00:06:38.225 "rw_mbytes_per_sec": 0, 00:06:38.225 "r_mbytes_per_sec": 0, 00:06:38.225 "w_mbytes_per_sec": 0 00:06:38.225 }, 00:06:38.225 "claimed": true, 00:06:38.225 "claim_type": "exclusive_write", 00:06:38.225 "zoned": false, 00:06:38.225 "supported_io_types": { 00:06:38.225 "read": true, 00:06:38.225 "write": true, 00:06:38.225 "unmap": true, 00:06:38.225 "flush": true, 
00:06:38.225 "reset": true, 00:06:38.225 "nvme_admin": false, 00:06:38.225 "nvme_io": false, 00:06:38.225 "nvme_io_md": false, 00:06:38.225 "write_zeroes": true, 00:06:38.225 "zcopy": true, 00:06:38.225 "get_zone_info": false, 00:06:38.225 "zone_management": false, 00:06:38.225 "zone_append": false, 00:06:38.225 "compare": false, 00:06:38.225 "compare_and_write": false, 00:06:38.225 "abort": true, 00:06:38.225 "seek_hole": false, 00:06:38.225 "seek_data": false, 00:06:38.225 "copy": true, 00:06:38.225 "nvme_iov_md": false 00:06:38.225 }, 00:06:38.225 "memory_domains": [ 00:06:38.225 { 00:06:38.225 "dma_device_id": "system", 00:06:38.225 "dma_device_type": 1 00:06:38.225 }, 00:06:38.225 { 00:06:38.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.225 "dma_device_type": 2 00:06:38.225 } 00:06:38.225 ], 00:06:38.225 "driver_specific": {} 00:06:38.225 } 00:06:38.225 ] 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:38.225 "name": "Existed_Raid", 00:06:38.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:38.225 "strip_size_kb": 64, 00:06:38.225 "state": "configuring", 00:06:38.225 "raid_level": "raid0", 00:06:38.225 "superblock": false, 00:06:38.225 "num_base_bdevs": 2, 00:06:38.225 "num_base_bdevs_discovered": 1, 00:06:38.225 "num_base_bdevs_operational": 2, 00:06:38.225 "base_bdevs_list": [ 00:06:38.225 { 00:06:38.225 "name": "BaseBdev1", 00:06:38.225 "uuid": "6d71277f-1902-4ae1-9d85-f53d33053016", 00:06:38.225 "is_configured": true, 00:06:38.225 "data_offset": 0, 00:06:38.225 "data_size": 65536 00:06:38.225 }, 00:06:38.225 { 00:06:38.225 "name": "BaseBdev2", 00:06:38.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:38.225 "is_configured": false, 00:06:38.225 "data_offset": 0, 00:06:38.225 "data_size": 0 00:06:38.225 } 00:06:38.225 ] 00:06:38.225 }' 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:38.225 16:59:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.485 [2024-11-20 16:59:02.322095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:38.485 [2024-11-20 16:59:02.322182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.485 [2024-11-20 16:59:02.330136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:38.485 [2024-11-20 16:59:02.332760] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:38.485 [2024-11-20 16:59:02.332870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.485 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.743 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.743 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:38.743 "name": "Existed_Raid", 00:06:38.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:38.743 "strip_size_kb": 64, 00:06:38.743 "state": "configuring", 00:06:38.743 "raid_level": "raid0", 00:06:38.743 "superblock": false, 00:06:38.743 "num_base_bdevs": 2, 00:06:38.743 
"num_base_bdevs_discovered": 1, 00:06:38.744 "num_base_bdevs_operational": 2, 00:06:38.744 "base_bdevs_list": [ 00:06:38.744 { 00:06:38.744 "name": "BaseBdev1", 00:06:38.744 "uuid": "6d71277f-1902-4ae1-9d85-f53d33053016", 00:06:38.744 "is_configured": true, 00:06:38.744 "data_offset": 0, 00:06:38.744 "data_size": 65536 00:06:38.744 }, 00:06:38.744 { 00:06:38.744 "name": "BaseBdev2", 00:06:38.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:38.744 "is_configured": false, 00:06:38.744 "data_offset": 0, 00:06:38.744 "data_size": 0 00:06:38.744 } 00:06:38.744 ] 00:06:38.744 }' 00:06:38.744 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:38.744 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.003 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:39.003 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.003 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.003 [2024-11-20 16:59:02.850781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:39.003 [2024-11-20 16:59:02.850875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:39.003 [2024-11-20 16:59:02.850888] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:39.003 [2024-11-20 16:59:02.851212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:39.003 [2024-11-20 16:59:02.851483] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:39.003 [2024-11-20 16:59:02.851511] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:39.003 [2024-11-20 16:59:02.851910] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:39.003 BaseBdev2 00:06:39.003 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.003 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:39.003 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:39.003 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:39.003 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:39.003 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:39.003 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:39.003 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:39.003 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.003 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.003 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.003 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:39.003 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.003 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.262 [ 00:06:39.262 { 00:06:39.262 "name": "BaseBdev2", 00:06:39.262 "aliases": [ 00:06:39.262 "5420a976-0a9a-4f26-b2bb-77f406f22789" 00:06:39.262 ], 00:06:39.262 "product_name": "Malloc disk", 00:06:39.262 "block_size": 512, 00:06:39.262 "num_blocks": 65536, 00:06:39.262 "uuid": "5420a976-0a9a-4f26-b2bb-77f406f22789", 00:06:39.262 
"assigned_rate_limits": { 00:06:39.262 "rw_ios_per_sec": 0, 00:06:39.262 "rw_mbytes_per_sec": 0, 00:06:39.262 "r_mbytes_per_sec": 0, 00:06:39.262 "w_mbytes_per_sec": 0 00:06:39.262 }, 00:06:39.262 "claimed": true, 00:06:39.262 "claim_type": "exclusive_write", 00:06:39.262 "zoned": false, 00:06:39.262 "supported_io_types": { 00:06:39.262 "read": true, 00:06:39.262 "write": true, 00:06:39.262 "unmap": true, 00:06:39.262 "flush": true, 00:06:39.262 "reset": true, 00:06:39.262 "nvme_admin": false, 00:06:39.262 "nvme_io": false, 00:06:39.262 "nvme_io_md": false, 00:06:39.262 "write_zeroes": true, 00:06:39.262 "zcopy": true, 00:06:39.262 "get_zone_info": false, 00:06:39.262 "zone_management": false, 00:06:39.262 "zone_append": false, 00:06:39.262 "compare": false, 00:06:39.262 "compare_and_write": false, 00:06:39.262 "abort": true, 00:06:39.262 "seek_hole": false, 00:06:39.262 "seek_data": false, 00:06:39.262 "copy": true, 00:06:39.262 "nvme_iov_md": false 00:06:39.262 }, 00:06:39.262 "memory_domains": [ 00:06:39.262 { 00:06:39.262 "dma_device_id": "system", 00:06:39.262 "dma_device_type": 1 00:06:39.262 }, 00:06:39.262 { 00:06:39.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:39.262 "dma_device_type": 2 00:06:39.262 } 00:06:39.262 ], 00:06:39.262 "driver_specific": {} 00:06:39.262 } 00:06:39.262 ] 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.262 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:39.262 "name": "Existed_Raid", 00:06:39.262 "uuid": "f88693ca-0aa5-45ff-b6d1-93e2194b0d92", 00:06:39.262 "strip_size_kb": 64, 00:06:39.262 "state": "online", 00:06:39.262 "raid_level": "raid0", 00:06:39.262 "superblock": false, 00:06:39.262 "num_base_bdevs": 2, 00:06:39.262 "num_base_bdevs_discovered": 2, 00:06:39.262 "num_base_bdevs_operational": 2, 00:06:39.262 "base_bdevs_list": [ 00:06:39.262 { 
00:06:39.262 "name": "BaseBdev1", 00:06:39.262 "uuid": "6d71277f-1902-4ae1-9d85-f53d33053016", 00:06:39.262 "is_configured": true, 00:06:39.262 "data_offset": 0, 00:06:39.262 "data_size": 65536 00:06:39.262 }, 00:06:39.262 { 00:06:39.262 "name": "BaseBdev2", 00:06:39.262 "uuid": "5420a976-0a9a-4f26-b2bb-77f406f22789", 00:06:39.295 "is_configured": true, 00:06:39.295 "data_offset": 0, 00:06:39.295 "data_size": 65536 00:06:39.295 } 00:06:39.295 ] 00:06:39.295 }' 00:06:39.295 16:59:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:39.295 16:59:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.555 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:39.555 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:39.555 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:39.555 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:39.555 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:39.555 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:39.555 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:39.555 16:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.555 16:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.555 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:39.555 [2024-11-20 16:59:03.395401] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:39.555 16:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:39.814 "name": "Existed_Raid", 00:06:39.814 "aliases": [ 00:06:39.814 "f88693ca-0aa5-45ff-b6d1-93e2194b0d92" 00:06:39.814 ], 00:06:39.814 "product_name": "Raid Volume", 00:06:39.814 "block_size": 512, 00:06:39.814 "num_blocks": 131072, 00:06:39.814 "uuid": "f88693ca-0aa5-45ff-b6d1-93e2194b0d92", 00:06:39.814 "assigned_rate_limits": { 00:06:39.814 "rw_ios_per_sec": 0, 00:06:39.814 "rw_mbytes_per_sec": 0, 00:06:39.814 "r_mbytes_per_sec": 0, 00:06:39.814 "w_mbytes_per_sec": 0 00:06:39.814 }, 00:06:39.814 "claimed": false, 00:06:39.814 "zoned": false, 00:06:39.814 "supported_io_types": { 00:06:39.814 "read": true, 00:06:39.814 "write": true, 00:06:39.814 "unmap": true, 00:06:39.814 "flush": true, 00:06:39.814 "reset": true, 00:06:39.814 "nvme_admin": false, 00:06:39.814 "nvme_io": false, 00:06:39.814 "nvme_io_md": false, 00:06:39.814 "write_zeroes": true, 00:06:39.814 "zcopy": false, 00:06:39.814 "get_zone_info": false, 00:06:39.814 "zone_management": false, 00:06:39.814 "zone_append": false, 00:06:39.814 "compare": false, 00:06:39.814 "compare_and_write": false, 00:06:39.814 "abort": false, 00:06:39.814 "seek_hole": false, 00:06:39.814 "seek_data": false, 00:06:39.814 "copy": false, 00:06:39.814 "nvme_iov_md": false 00:06:39.814 }, 00:06:39.814 "memory_domains": [ 00:06:39.814 { 00:06:39.814 "dma_device_id": "system", 00:06:39.814 "dma_device_type": 1 00:06:39.814 }, 00:06:39.814 { 00:06:39.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:39.814 "dma_device_type": 2 00:06:39.814 }, 00:06:39.814 { 00:06:39.814 "dma_device_id": "system", 00:06:39.814 "dma_device_type": 1 00:06:39.814 }, 00:06:39.814 { 00:06:39.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:39.814 "dma_device_type": 2 00:06:39.814 } 00:06:39.814 ], 00:06:39.814 "driver_specific": { 00:06:39.814 "raid": { 00:06:39.814 "uuid": "f88693ca-0aa5-45ff-b6d1-93e2194b0d92", 
00:06:39.814 "strip_size_kb": 64, 00:06:39.814 "state": "online", 00:06:39.814 "raid_level": "raid0", 00:06:39.814 "superblock": false, 00:06:39.814 "num_base_bdevs": 2, 00:06:39.814 "num_base_bdevs_discovered": 2, 00:06:39.814 "num_base_bdevs_operational": 2, 00:06:39.814 "base_bdevs_list": [ 00:06:39.814 { 00:06:39.814 "name": "BaseBdev1", 00:06:39.814 "uuid": "6d71277f-1902-4ae1-9d85-f53d33053016", 00:06:39.814 "is_configured": true, 00:06:39.814 "data_offset": 0, 00:06:39.814 "data_size": 65536 00:06:39.814 }, 00:06:39.814 { 00:06:39.814 "name": "BaseBdev2", 00:06:39.814 "uuid": "5420a976-0a9a-4f26-b2bb-77f406f22789", 00:06:39.814 "is_configured": true, 00:06:39.814 "data_offset": 0, 00:06:39.814 "data_size": 65536 00:06:39.814 } 00:06:39.814 ] 00:06:39.814 } 00:06:39.814 } 00:06:39.814 }' 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:39.814 BaseBdev2' 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.814 16:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.814 [2024-11-20 16:59:03.659140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:39.814 [2024-11-20 16:59:03.659193] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:39.814 [2024-11-20 16:59:03.659296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:40.073 16:59:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:40.073 "name": "Existed_Raid", 00:06:40.073 "uuid": "f88693ca-0aa5-45ff-b6d1-93e2194b0d92", 00:06:40.073 "strip_size_kb": 64, 00:06:40.073 "state": "offline", 00:06:40.073 "raid_level": "raid0", 00:06:40.073 "superblock": false, 00:06:40.073 "num_base_bdevs": 2, 00:06:40.073 "num_base_bdevs_discovered": 1, 00:06:40.073 "num_base_bdevs_operational": 1, 00:06:40.073 "base_bdevs_list": [ 00:06:40.073 { 00:06:40.073 "name": null, 00:06:40.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:40.073 "is_configured": false, 00:06:40.073 "data_offset": 0, 00:06:40.073 "data_size": 65536 00:06:40.073 }, 00:06:40.073 { 00:06:40.073 "name": "BaseBdev2", 00:06:40.073 "uuid": "5420a976-0a9a-4f26-b2bb-77f406f22789", 00:06:40.073 "is_configured": true, 00:06:40.073 "data_offset": 0, 00:06:40.073 "data_size": 65536 00:06:40.073 } 00:06:40.073 ] 00:06:40.073 }' 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:40.073 16:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.643 16:59:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.643 [2024-11-20 16:59:04.315314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:40.643 [2024-11-20 16:59:04.315524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60466 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60466 ']' 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60466 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60466 00:06:40.643 killing process with pid 60466 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60466' 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60466 00:06:40.643 [2024-11-20 16:59:04.486292] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:40.643 16:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60466 00:06:40.643 [2024-11-20 16:59:04.500592] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:41.581 16:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:41.581 00:06:41.581 real 0m5.207s 00:06:41.581 user 0m7.892s 00:06:41.581 sys 
0m0.763s 00:06:41.581 16:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.581 ************************************ 00:06:41.581 END TEST raid_state_function_test 00:06:41.581 ************************************ 00:06:41.581 16:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.840 16:59:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:41.840 16:59:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:41.840 16:59:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.840 16:59:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:41.840 ************************************ 00:06:41.840 START TEST raid_state_function_test_sb 00:06:41.840 ************************************ 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:41.840 Process raid pid: 60725 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60725 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60725' 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60725 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60725 ']' 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.840 16:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:41.840 [2024-11-20 16:59:05.594621] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:06:41.840 [2024-11-20 16:59:05.595207] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.100 [2024-11-20 16:59:05.779898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.100 [2024-11-20 16:59:05.888187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.359 [2024-11-20 16:59:06.066674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.359 [2024-11-20 16:59:06.066724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.928 [2024-11-20 16:59:06.539903] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:42.928 [2024-11-20 16:59:06.539963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:42.928 [2024-11-20 16:59:06.539981] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:42.928 [2024-11-20 16:59:06.539997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.928 
16:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:42.928 "name": "Existed_Raid", 00:06:42.928 "uuid": "414bedd4-876e-4742-9076-d53a871f3224", 00:06:42.928 "strip_size_kb": 
64, 00:06:42.928 "state": "configuring", 00:06:42.928 "raid_level": "raid0", 00:06:42.928 "superblock": true, 00:06:42.928 "num_base_bdevs": 2, 00:06:42.928 "num_base_bdevs_discovered": 0, 00:06:42.928 "num_base_bdevs_operational": 2, 00:06:42.928 "base_bdevs_list": [ 00:06:42.928 { 00:06:42.928 "name": "BaseBdev1", 00:06:42.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:42.928 "is_configured": false, 00:06:42.928 "data_offset": 0, 00:06:42.928 "data_size": 0 00:06:42.928 }, 00:06:42.928 { 00:06:42.928 "name": "BaseBdev2", 00:06:42.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:42.928 "is_configured": false, 00:06:42.928 "data_offset": 0, 00:06:42.928 "data_size": 0 00:06:42.928 } 00:06:42.928 ] 00:06:42.928 }' 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:42.928 16:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.188 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:43.188 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.188 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.188 [2024-11-20 16:59:07.032006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:43.188 [2024-11-20 16:59:07.032045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:43.188 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.188 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:43.188 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.188 16:59:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.188 [2024-11-20 16:59:07.040011] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:43.188 [2024-11-20 16:59:07.040081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:43.188 [2024-11-20 16:59:07.040125] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:43.188 [2024-11-20 16:59:07.040158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:43.188 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.188 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:43.188 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.188 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.466 [2024-11-20 16:59:07.080708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:43.466 BaseBdev1 00:06:43.466 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.466 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:43.466 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:43.466 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:43.466 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:43.466 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:43.466 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:06:43.466 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:43.466 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.466 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.466 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.466 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:43.466 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.466 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.466 [ 00:06:43.466 { 00:06:43.466 "name": "BaseBdev1", 00:06:43.466 "aliases": [ 00:06:43.466 "94f41c06-df91-4440-9b43-1f5a3e91c570" 00:06:43.466 ], 00:06:43.466 "product_name": "Malloc disk", 00:06:43.466 "block_size": 512, 00:06:43.466 "num_blocks": 65536, 00:06:43.466 "uuid": "94f41c06-df91-4440-9b43-1f5a3e91c570", 00:06:43.466 "assigned_rate_limits": { 00:06:43.466 "rw_ios_per_sec": 0, 00:06:43.466 "rw_mbytes_per_sec": 0, 00:06:43.466 "r_mbytes_per_sec": 0, 00:06:43.466 "w_mbytes_per_sec": 0 00:06:43.466 }, 00:06:43.466 "claimed": true, 00:06:43.466 "claim_type": "exclusive_write", 00:06:43.466 "zoned": false, 00:06:43.466 "supported_io_types": { 00:06:43.466 "read": true, 00:06:43.466 "write": true, 00:06:43.466 "unmap": true, 00:06:43.466 "flush": true, 00:06:43.466 "reset": true, 00:06:43.466 "nvme_admin": false, 00:06:43.466 "nvme_io": false, 00:06:43.466 "nvme_io_md": false, 00:06:43.466 "write_zeroes": true, 00:06:43.466 "zcopy": true, 00:06:43.466 "get_zone_info": false, 00:06:43.466 "zone_management": false, 00:06:43.466 "zone_append": false, 00:06:43.466 "compare": false, 00:06:43.466 "compare_and_write": false, 00:06:43.466 
"abort": true, 00:06:43.466 "seek_hole": false, 00:06:43.466 "seek_data": false, 00:06:43.466 "copy": true, 00:06:43.466 "nvme_iov_md": false 00:06:43.466 }, 00:06:43.466 "memory_domains": [ 00:06:43.466 { 00:06:43.466 "dma_device_id": "system", 00:06:43.466 "dma_device_type": 1 00:06:43.466 }, 00:06:43.466 { 00:06:43.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:43.467 "dma_device_type": 2 00:06:43.467 } 00:06:43.467 ], 00:06:43.467 "driver_specific": {} 00:06:43.467 } 00:06:43.467 ] 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:43.467 "name": "Existed_Raid", 00:06:43.467 "uuid": "b093b571-3d50-49cd-8bf1-5bbd2335316a", 00:06:43.467 "strip_size_kb": 64, 00:06:43.467 "state": "configuring", 00:06:43.467 "raid_level": "raid0", 00:06:43.467 "superblock": true, 00:06:43.467 "num_base_bdevs": 2, 00:06:43.467 "num_base_bdevs_discovered": 1, 00:06:43.467 "num_base_bdevs_operational": 2, 00:06:43.467 "base_bdevs_list": [ 00:06:43.467 { 00:06:43.467 "name": "BaseBdev1", 00:06:43.467 "uuid": "94f41c06-df91-4440-9b43-1f5a3e91c570", 00:06:43.467 "is_configured": true, 00:06:43.467 "data_offset": 2048, 00:06:43.467 "data_size": 63488 00:06:43.467 }, 00:06:43.467 { 00:06:43.467 "name": "BaseBdev2", 00:06:43.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:43.467 "is_configured": false, 00:06:43.467 "data_offset": 0, 00:06:43.467 "data_size": 0 00:06:43.467 } 00:06:43.467 ] 00:06:43.467 }' 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:43.467 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:44.052 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:44.052 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:44.053 [2024-11-20 16:59:07.653010] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:44.053 [2024-11-20 16:59:07.653065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:44.053 [2024-11-20 16:59:07.661062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:44.053 [2024-11-20 16:59:07.663461] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:44.053 [2024-11-20 16:59:07.663510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:44.053 "name": "Existed_Raid", 00:06:44.053 "uuid": "a39251fd-aa53-4460-a384-cd5cf865330f", 00:06:44.053 "strip_size_kb": 64, 00:06:44.053 "state": "configuring", 00:06:44.053 "raid_level": "raid0", 00:06:44.053 "superblock": true, 00:06:44.053 "num_base_bdevs": 2, 00:06:44.053 "num_base_bdevs_discovered": 1, 00:06:44.053 "num_base_bdevs_operational": 2, 00:06:44.053 "base_bdevs_list": [ 00:06:44.053 { 00:06:44.053 "name": "BaseBdev1", 00:06:44.053 "uuid": "94f41c06-df91-4440-9b43-1f5a3e91c570", 00:06:44.053 "is_configured": true, 00:06:44.053 "data_offset": 2048, 
00:06:44.053 "data_size": 63488 00:06:44.053 }, 00:06:44.053 { 00:06:44.053 "name": "BaseBdev2", 00:06:44.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.053 "is_configured": false, 00:06:44.053 "data_offset": 0, 00:06:44.053 "data_size": 0 00:06:44.053 } 00:06:44.053 ] 00:06:44.053 }' 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:44.053 16:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:44.621 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:44.621 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.621 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:44.621 [2024-11-20 16:59:08.245035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:44.622 [2024-11-20 16:59:08.245342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:44.622 [2024-11-20 16:59:08.245359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:44.622 BaseBdev2 00:06:44.622 [2024-11-20 16:59:08.245713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:44.622 [2024-11-20 16:59:08.245979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:44.622 [2024-11-20 16:59:08.246010] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:44.622 [2024-11-20 16:59:08.246193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:44.622 [ 00:06:44.622 { 00:06:44.622 "name": "BaseBdev2", 00:06:44.622 "aliases": [ 00:06:44.622 "30c419f6-9e9a-4fa3-8c8f-d8960074289b" 00:06:44.622 ], 00:06:44.622 "product_name": "Malloc disk", 00:06:44.622 "block_size": 512, 00:06:44.622 "num_blocks": 65536, 00:06:44.622 "uuid": "30c419f6-9e9a-4fa3-8c8f-d8960074289b", 00:06:44.622 "assigned_rate_limits": { 00:06:44.622 "rw_ios_per_sec": 0, 00:06:44.622 "rw_mbytes_per_sec": 0, 00:06:44.622 "r_mbytes_per_sec": 0, 00:06:44.622 "w_mbytes_per_sec": 0 00:06:44.622 }, 00:06:44.622 "claimed": true, 00:06:44.622 "claim_type": 
"exclusive_write", 00:06:44.622 "zoned": false, 00:06:44.622 "supported_io_types": { 00:06:44.622 "read": true, 00:06:44.622 "write": true, 00:06:44.622 "unmap": true, 00:06:44.622 "flush": true, 00:06:44.622 "reset": true, 00:06:44.622 "nvme_admin": false, 00:06:44.622 "nvme_io": false, 00:06:44.622 "nvme_io_md": false, 00:06:44.622 "write_zeroes": true, 00:06:44.622 "zcopy": true, 00:06:44.622 "get_zone_info": false, 00:06:44.622 "zone_management": false, 00:06:44.622 "zone_append": false, 00:06:44.622 "compare": false, 00:06:44.622 "compare_and_write": false, 00:06:44.622 "abort": true, 00:06:44.622 "seek_hole": false, 00:06:44.622 "seek_data": false, 00:06:44.622 "copy": true, 00:06:44.622 "nvme_iov_md": false 00:06:44.622 }, 00:06:44.622 "memory_domains": [ 00:06:44.622 { 00:06:44.622 "dma_device_id": "system", 00:06:44.622 "dma_device_type": 1 00:06:44.622 }, 00:06:44.622 { 00:06:44.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.622 "dma_device_type": 2 00:06:44.622 } 00:06:44.622 ], 00:06:44.622 "driver_specific": {} 00:06:44.622 } 00:06:44.622 ] 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:44.622 "name": "Existed_Raid", 00:06:44.622 "uuid": "a39251fd-aa53-4460-a384-cd5cf865330f", 00:06:44.622 "strip_size_kb": 64, 00:06:44.622 "state": "online", 00:06:44.622 "raid_level": "raid0", 00:06:44.622 "superblock": true, 00:06:44.622 "num_base_bdevs": 2, 00:06:44.622 "num_base_bdevs_discovered": 2, 00:06:44.622 "num_base_bdevs_operational": 2, 00:06:44.622 "base_bdevs_list": [ 00:06:44.622 { 00:06:44.622 "name": "BaseBdev1", 00:06:44.622 "uuid": "94f41c06-df91-4440-9b43-1f5a3e91c570", 00:06:44.622 "is_configured": true, 00:06:44.622 "data_offset": 2048, 00:06:44.622 "data_size": 63488 
00:06:44.622 }, 00:06:44.622 { 00:06:44.622 "name": "BaseBdev2", 00:06:44.622 "uuid": "30c419f6-9e9a-4fa3-8c8f-d8960074289b", 00:06:44.622 "is_configured": true, 00:06:44.622 "data_offset": 2048, 00:06:44.622 "data_size": 63488 00:06:44.622 } 00:06:44.622 ] 00:06:44.622 }' 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:44.622 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:45.191 [2024-11-20 16:59:08.809617] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:45.191 "name": 
"Existed_Raid", 00:06:45.191 "aliases": [ 00:06:45.191 "a39251fd-aa53-4460-a384-cd5cf865330f" 00:06:45.191 ], 00:06:45.191 "product_name": "Raid Volume", 00:06:45.191 "block_size": 512, 00:06:45.191 "num_blocks": 126976, 00:06:45.191 "uuid": "a39251fd-aa53-4460-a384-cd5cf865330f", 00:06:45.191 "assigned_rate_limits": { 00:06:45.191 "rw_ios_per_sec": 0, 00:06:45.191 "rw_mbytes_per_sec": 0, 00:06:45.191 "r_mbytes_per_sec": 0, 00:06:45.191 "w_mbytes_per_sec": 0 00:06:45.191 }, 00:06:45.191 "claimed": false, 00:06:45.191 "zoned": false, 00:06:45.191 "supported_io_types": { 00:06:45.191 "read": true, 00:06:45.191 "write": true, 00:06:45.191 "unmap": true, 00:06:45.191 "flush": true, 00:06:45.191 "reset": true, 00:06:45.191 "nvme_admin": false, 00:06:45.191 "nvme_io": false, 00:06:45.191 "nvme_io_md": false, 00:06:45.191 "write_zeroes": true, 00:06:45.191 "zcopy": false, 00:06:45.191 "get_zone_info": false, 00:06:45.191 "zone_management": false, 00:06:45.191 "zone_append": false, 00:06:45.191 "compare": false, 00:06:45.191 "compare_and_write": false, 00:06:45.191 "abort": false, 00:06:45.191 "seek_hole": false, 00:06:45.191 "seek_data": false, 00:06:45.191 "copy": false, 00:06:45.191 "nvme_iov_md": false 00:06:45.191 }, 00:06:45.191 "memory_domains": [ 00:06:45.191 { 00:06:45.191 "dma_device_id": "system", 00:06:45.191 "dma_device_type": 1 00:06:45.191 }, 00:06:45.191 { 00:06:45.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.191 "dma_device_type": 2 00:06:45.191 }, 00:06:45.191 { 00:06:45.191 "dma_device_id": "system", 00:06:45.191 "dma_device_type": 1 00:06:45.191 }, 00:06:45.191 { 00:06:45.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.191 "dma_device_type": 2 00:06:45.191 } 00:06:45.191 ], 00:06:45.191 "driver_specific": { 00:06:45.191 "raid": { 00:06:45.191 "uuid": "a39251fd-aa53-4460-a384-cd5cf865330f", 00:06:45.191 "strip_size_kb": 64, 00:06:45.191 "state": "online", 00:06:45.191 "raid_level": "raid0", 00:06:45.191 "superblock": true, 00:06:45.191 
"num_base_bdevs": 2, 00:06:45.191 "num_base_bdevs_discovered": 2, 00:06:45.191 "num_base_bdevs_operational": 2, 00:06:45.191 "base_bdevs_list": [ 00:06:45.191 { 00:06:45.191 "name": "BaseBdev1", 00:06:45.191 "uuid": "94f41c06-df91-4440-9b43-1f5a3e91c570", 00:06:45.191 "is_configured": true, 00:06:45.191 "data_offset": 2048, 00:06:45.191 "data_size": 63488 00:06:45.191 }, 00:06:45.191 { 00:06:45.191 "name": "BaseBdev2", 00:06:45.191 "uuid": "30c419f6-9e9a-4fa3-8c8f-d8960074289b", 00:06:45.191 "is_configured": true, 00:06:45.191 "data_offset": 2048, 00:06:45.191 "data_size": 63488 00:06:45.191 } 00:06:45.191 ] 00:06:45.191 } 00:06:45.191 } 00:06:45.191 }' 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:45.191 BaseBdev2' 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:45.191 16:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:45.191 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:45.191 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:45.191 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:45.191 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:45.191 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:45.191 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.191 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:45.191 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:45.451 [2024-11-20 16:59:09.081445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:45.451 [2024-11-20 16:59:09.081484] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:45.451 [2024-11-20 16:59:09.081543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:45.451 16:59:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:45.451 "name": "Existed_Raid", 00:06:45.451 "uuid": "a39251fd-aa53-4460-a384-cd5cf865330f", 00:06:45.451 "strip_size_kb": 64, 00:06:45.451 "state": "offline", 00:06:45.451 "raid_level": "raid0", 00:06:45.451 "superblock": true, 00:06:45.451 "num_base_bdevs": 2, 00:06:45.451 "num_base_bdevs_discovered": 1, 00:06:45.451 "num_base_bdevs_operational": 1, 00:06:45.451 "base_bdevs_list": [ 00:06:45.451 { 00:06:45.451 "name": null, 00:06:45.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:45.451 "is_configured": false, 00:06:45.451 "data_offset": 0, 00:06:45.451 "data_size": 63488 00:06:45.451 }, 00:06:45.451 { 00:06:45.451 "name": "BaseBdev2", 00:06:45.451 "uuid": "30c419f6-9e9a-4fa3-8c8f-d8960074289b", 00:06:45.451 "is_configured": true, 00:06:45.451 "data_offset": 2048, 00:06:45.451 "data_size": 63488 00:06:45.451 } 00:06:45.451 ] 00:06:45.451 }' 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:45.451 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:46.018 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:46.018 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:46.018 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.018 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.018 16:59:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:46.018 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:46.018 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.018 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:46.018 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:46.018 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:46.018 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.018 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:46.018 [2024-11-20 16:59:09.733393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:46.018 [2024-11-20 16:59:09.733447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:46.018 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.018 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:46.019 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:46.019 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.019 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.019 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:46.019 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:46.019 16:59:09 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.019 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:46.019 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:46.019 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:46.019 16:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60725 00:06:46.019 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60725 ']' 00:06:46.019 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60725 00:06:46.019 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:06:46.019 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.019 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60725 00:06:46.277 killing process with pid 60725 00:06:46.277 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.277 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.277 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60725' 00:06:46.277 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60725 00:06:46.277 [2024-11-20 16:59:09.892505] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:46.277 16:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60725 00:06:46.277 [2024-11-20 16:59:09.907589] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:47.215 16:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:06:47.215 00:06:47.215 real 0m5.333s 00:06:47.215 user 0m8.157s 00:06:47.215 sys 0m0.781s 00:06:47.215 16:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.215 ************************************ 00:06:47.215 END TEST raid_state_function_test_sb 00:06:47.215 ************************************ 00:06:47.215 16:59:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:47.215 16:59:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:47.215 16:59:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:47.215 16:59:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.215 16:59:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:47.215 ************************************ 00:06:47.215 START TEST raid_superblock_test 00:06:47.215 ************************************ 00:06:47.215 16:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:06:47.215 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:06:47.215 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:47.215 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:47.215 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:47.215 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:47.215 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:47.215 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:47.215 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:47.215 16:59:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:47.215 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:47.215 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:47.215 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:47.215 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:47.215 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:06:47.215 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:47.216 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:47.216 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60977 00:06:47.216 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:47.216 16:59:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60977 00:06:47.216 16:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60977 ']' 00:06:47.216 16:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.216 16:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.216 16:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:47.216 16:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.216 16:59:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.216 [2024-11-20 16:59:10.979380] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:06:47.216 [2024-11-20 16:59:10.979842] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60977 ] 00:06:47.475 [2024-11-20 16:59:11.155182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.475 [2024-11-20 16:59:11.268994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.734 [2024-11-20 16:59:11.466023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.734 [2024-11-20 16:59:11.466058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:48.303 16:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.303 16:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:48.303 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:48.303 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:48.303 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:48.303 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:48.303 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:48.303 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:48.303 16:59:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:48.303 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:48.303 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:48.303 16:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.303 16:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.303 malloc1 00:06:48.303 16:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.303 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:48.303 16:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.303 16:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.303 [2024-11-20 16:59:11.935825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:48.303 [2024-11-20 16:59:11.935917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.303 [2024-11-20 16:59:11.935946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:48.303 [2024-11-20 16:59:11.935960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.303 [2024-11-20 16:59:11.938716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.304 [2024-11-20 16:59:11.938787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:48.304 pt1 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:48.304 16:59:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.304 malloc2 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.304 [2024-11-20 16:59:11.993554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:48.304 [2024-11-20 16:59:11.993628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.304 [2024-11-20 16:59:11.993661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:48.304 
[2024-11-20 16:59:11.993673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.304 [2024-11-20 16:59:11.996542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.304 [2024-11-20 16:59:11.996579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:48.304 pt2 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.304 16:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.304 [2024-11-20 16:59:12.005602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:48.304 [2024-11-20 16:59:12.008087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:48.304 [2024-11-20 16:59:12.008294] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:48.304 [2024-11-20 16:59:12.008325] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:48.304 [2024-11-20 16:59:12.008574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:48.304 [2024-11-20 16:59:12.008732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:48.304 [2024-11-20 16:59:12.008750] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:06:48.304 [2024-11-20 16:59:12.008980] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:48.304 "name": "raid_bdev1", 00:06:48.304 "uuid": 
"e43013e3-0280-471f-a64f-0c60206f273f", 00:06:48.304 "strip_size_kb": 64, 00:06:48.304 "state": "online", 00:06:48.304 "raid_level": "raid0", 00:06:48.304 "superblock": true, 00:06:48.304 "num_base_bdevs": 2, 00:06:48.304 "num_base_bdevs_discovered": 2, 00:06:48.304 "num_base_bdevs_operational": 2, 00:06:48.304 "base_bdevs_list": [ 00:06:48.304 { 00:06:48.304 "name": "pt1", 00:06:48.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:48.304 "is_configured": true, 00:06:48.304 "data_offset": 2048, 00:06:48.304 "data_size": 63488 00:06:48.304 }, 00:06:48.304 { 00:06:48.304 "name": "pt2", 00:06:48.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:48.304 "is_configured": true, 00:06:48.304 "data_offset": 2048, 00:06:48.304 "data_size": 63488 00:06:48.304 } 00:06:48.304 ] 00:06:48.304 }' 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:48.304 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.872 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:48.872 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:48.872 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:48.872 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:48.872 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:48.872 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:48.872 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:48.872 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.872 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.872 
16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:48.872 [2024-11-20 16:59:12.534110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:48.873 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.873 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:48.873 "name": "raid_bdev1", 00:06:48.873 "aliases": [ 00:06:48.873 "e43013e3-0280-471f-a64f-0c60206f273f" 00:06:48.873 ], 00:06:48.873 "product_name": "Raid Volume", 00:06:48.873 "block_size": 512, 00:06:48.873 "num_blocks": 126976, 00:06:48.873 "uuid": "e43013e3-0280-471f-a64f-0c60206f273f", 00:06:48.873 "assigned_rate_limits": { 00:06:48.873 "rw_ios_per_sec": 0, 00:06:48.873 "rw_mbytes_per_sec": 0, 00:06:48.873 "r_mbytes_per_sec": 0, 00:06:48.873 "w_mbytes_per_sec": 0 00:06:48.873 }, 00:06:48.873 "claimed": false, 00:06:48.873 "zoned": false, 00:06:48.873 "supported_io_types": { 00:06:48.873 "read": true, 00:06:48.873 "write": true, 00:06:48.873 "unmap": true, 00:06:48.873 "flush": true, 00:06:48.873 "reset": true, 00:06:48.873 "nvme_admin": false, 00:06:48.873 "nvme_io": false, 00:06:48.873 "nvme_io_md": false, 00:06:48.873 "write_zeroes": true, 00:06:48.873 "zcopy": false, 00:06:48.873 "get_zone_info": false, 00:06:48.873 "zone_management": false, 00:06:48.873 "zone_append": false, 00:06:48.873 "compare": false, 00:06:48.873 "compare_and_write": false, 00:06:48.873 "abort": false, 00:06:48.873 "seek_hole": false, 00:06:48.873 "seek_data": false, 00:06:48.873 "copy": false, 00:06:48.873 "nvme_iov_md": false 00:06:48.873 }, 00:06:48.873 "memory_domains": [ 00:06:48.873 { 00:06:48.873 "dma_device_id": "system", 00:06:48.873 "dma_device_type": 1 00:06:48.873 }, 00:06:48.873 { 00:06:48.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.873 "dma_device_type": 2 00:06:48.873 }, 00:06:48.873 { 00:06:48.873 "dma_device_id": "system", 00:06:48.873 
"dma_device_type": 1 00:06:48.873 }, 00:06:48.873 { 00:06:48.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.873 "dma_device_type": 2 00:06:48.873 } 00:06:48.873 ], 00:06:48.873 "driver_specific": { 00:06:48.873 "raid": { 00:06:48.873 "uuid": "e43013e3-0280-471f-a64f-0c60206f273f", 00:06:48.873 "strip_size_kb": 64, 00:06:48.873 "state": "online", 00:06:48.873 "raid_level": "raid0", 00:06:48.873 "superblock": true, 00:06:48.873 "num_base_bdevs": 2, 00:06:48.873 "num_base_bdevs_discovered": 2, 00:06:48.873 "num_base_bdevs_operational": 2, 00:06:48.873 "base_bdevs_list": [ 00:06:48.873 { 00:06:48.873 "name": "pt1", 00:06:48.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:48.873 "is_configured": true, 00:06:48.873 "data_offset": 2048, 00:06:48.873 "data_size": 63488 00:06:48.873 }, 00:06:48.873 { 00:06:48.873 "name": "pt2", 00:06:48.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:48.873 "is_configured": true, 00:06:48.873 "data_offset": 2048, 00:06:48.873 "data_size": 63488 00:06:48.873 } 00:06:48.873 ] 00:06:48.873 } 00:06:48.873 } 00:06:48.873 }' 00:06:48.873 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:48.873 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:48.873 pt2' 00:06:48.873 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:48.873 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:48.873 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:48.873 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:48.873 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.873 16:59:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.873 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:48.873 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:49.133 [2024-11-20 16:59:12.802231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e43013e3-0280-471f-a64f-0c60206f273f 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e43013e3-0280-471f-a64f-0c60206f273f ']' 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.133 [2024-11-20 16:59:12.853834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:49.133 [2024-11-20 16:59:12.853876] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:49.133 [2024-11-20 16:59:12.853975] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:49.133 [2024-11-20 16:59:12.854052] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:49.133 [2024-11-20 16:59:12.854073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.133 16:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.133 [2024-11-20 16:59:12.993958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:49.133 [2024-11-20 16:59:12.996953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:49.133 [2024-11-20 16:59:12.997041] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:49.133 [2024-11-20 16:59:12.997116] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:49.133 [2024-11-20 16:59:12.997155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:49.133 [2024-11-20 16:59:12.997173] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:06:49.133 request: 00:06:49.133 { 00:06:49.393 "name": "raid_bdev1", 00:06:49.393 "raid_level": "raid0", 00:06:49.393 "base_bdevs": [ 00:06:49.393 "malloc1", 00:06:49.393 "malloc2" 00:06:49.393 ], 00:06:49.394 "strip_size_kb": 64, 00:06:49.394 "superblock": false, 00:06:49.394 "method": "bdev_raid_create", 00:06:49.394 "req_id": 1 00:06:49.394 } 00:06:49.394 Got JSON-RPC error response 00:06:49.394 response: 00:06:49.394 { 00:06:49.394 "code": -17, 00:06:49.394 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:49.394 } 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.394 [2024-11-20 16:59:13.065970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:49.394 [2024-11-20 16:59:13.066044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:49.394 [2024-11-20 16:59:13.066070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:49.394 [2024-11-20 16:59:13.066086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:49.394 [2024-11-20 16:59:13.069005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:49.394 [2024-11-20 16:59:13.069051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:49.394 [2024-11-20 16:59:13.069192] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:49.394 [2024-11-20 16:59:13.069254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:49.394 pt1 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:49.394 "name": "raid_bdev1", 00:06:49.394 "uuid": "e43013e3-0280-471f-a64f-0c60206f273f", 00:06:49.394 "strip_size_kb": 64, 00:06:49.394 "state": "configuring", 00:06:49.394 "raid_level": "raid0", 00:06:49.394 "superblock": true, 00:06:49.394 "num_base_bdevs": 2, 00:06:49.394 "num_base_bdevs_discovered": 1, 00:06:49.394 "num_base_bdevs_operational": 2, 00:06:49.394 "base_bdevs_list": [ 00:06:49.394 { 00:06:49.394 "name": "pt1", 00:06:49.394 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:49.394 "is_configured": true, 00:06:49.394 "data_offset": 2048, 00:06:49.394 "data_size": 63488 00:06:49.394 }, 00:06:49.394 { 00:06:49.394 "name": null, 00:06:49.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:49.394 "is_configured": false, 00:06:49.394 "data_offset": 2048, 00:06:49.394 "data_size": 63488 00:06:49.394 } 00:06:49.394 ] 00:06:49.394 }' 00:06:49.394 16:59:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:49.394 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.963 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:49.963 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:49.963 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:49.963 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:49.963 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.963 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.963 [2024-11-20 16:59:13.598169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:49.963 [2024-11-20 16:59:13.598252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:49.963 [2024-11-20 16:59:13.598282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:06:49.963 [2024-11-20 16:59:13.598299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:49.963 [2024-11-20 16:59:13.598902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:49.963 [2024-11-20 16:59:13.598935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:49.963 [2024-11-20 16:59:13.599059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:49.963 [2024-11-20 16:59:13.599098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:49.963 [2024-11-20 16:59:13.599284] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:49.963 [2024-11-20 16:59:13.599312] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:49.963 [2024-11-20 16:59:13.599647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:06:49.963 [2024-11-20 16:59:13.599857] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:49.963 [2024-11-20 16:59:13.599873] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:49.963 [2024-11-20 16:59:13.600087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.963 pt2 00:06:49.963 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.963 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:49.963 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:49.964 "name": "raid_bdev1", 00:06:49.964 "uuid": "e43013e3-0280-471f-a64f-0c60206f273f", 00:06:49.964 "strip_size_kb": 64, 00:06:49.964 "state": "online", 00:06:49.964 "raid_level": "raid0", 00:06:49.964 "superblock": true, 00:06:49.964 "num_base_bdevs": 2, 00:06:49.964 "num_base_bdevs_discovered": 2, 00:06:49.964 "num_base_bdevs_operational": 2, 00:06:49.964 "base_bdevs_list": [ 00:06:49.964 { 00:06:49.964 "name": "pt1", 00:06:49.964 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:49.964 "is_configured": true, 00:06:49.964 "data_offset": 2048, 00:06:49.964 "data_size": 63488 00:06:49.964 }, 00:06:49.964 { 00:06:49.964 "name": "pt2", 00:06:49.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:49.964 "is_configured": true, 00:06:49.964 "data_offset": 2048, 00:06:49.964 "data_size": 63488 00:06:49.964 } 00:06:49.964 ] 00:06:49.964 }' 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:49.964 16:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.532 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:06:50.532 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:50.532 
16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:50.532 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:50.532 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:50.532 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:50.532 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:50.532 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:50.532 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.532 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.532 [2024-11-20 16:59:14.130572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:50.532 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.532 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:50.532 "name": "raid_bdev1", 00:06:50.532 "aliases": [ 00:06:50.532 "e43013e3-0280-471f-a64f-0c60206f273f" 00:06:50.532 ], 00:06:50.532 "product_name": "Raid Volume", 00:06:50.532 "block_size": 512, 00:06:50.532 "num_blocks": 126976, 00:06:50.532 "uuid": "e43013e3-0280-471f-a64f-0c60206f273f", 00:06:50.532 "assigned_rate_limits": { 00:06:50.532 "rw_ios_per_sec": 0, 00:06:50.532 "rw_mbytes_per_sec": 0, 00:06:50.532 "r_mbytes_per_sec": 0, 00:06:50.532 "w_mbytes_per_sec": 0 00:06:50.532 }, 00:06:50.532 "claimed": false, 00:06:50.532 "zoned": false, 00:06:50.532 "supported_io_types": { 00:06:50.532 "read": true, 00:06:50.532 "write": true, 00:06:50.532 "unmap": true, 00:06:50.532 "flush": true, 00:06:50.532 "reset": true, 00:06:50.532 "nvme_admin": false, 00:06:50.532 "nvme_io": false, 00:06:50.532 "nvme_io_md": false, 00:06:50.532 
"write_zeroes": true, 00:06:50.532 "zcopy": false, 00:06:50.532 "get_zone_info": false, 00:06:50.532 "zone_management": false, 00:06:50.532 "zone_append": false, 00:06:50.532 "compare": false, 00:06:50.532 "compare_and_write": false, 00:06:50.532 "abort": false, 00:06:50.532 "seek_hole": false, 00:06:50.532 "seek_data": false, 00:06:50.532 "copy": false, 00:06:50.532 "nvme_iov_md": false 00:06:50.532 }, 00:06:50.532 "memory_domains": [ 00:06:50.532 { 00:06:50.532 "dma_device_id": "system", 00:06:50.532 "dma_device_type": 1 00:06:50.532 }, 00:06:50.532 { 00:06:50.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.532 "dma_device_type": 2 00:06:50.532 }, 00:06:50.532 { 00:06:50.532 "dma_device_id": "system", 00:06:50.532 "dma_device_type": 1 00:06:50.532 }, 00:06:50.532 { 00:06:50.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.533 "dma_device_type": 2 00:06:50.533 } 00:06:50.533 ], 00:06:50.533 "driver_specific": { 00:06:50.533 "raid": { 00:06:50.533 "uuid": "e43013e3-0280-471f-a64f-0c60206f273f", 00:06:50.533 "strip_size_kb": 64, 00:06:50.533 "state": "online", 00:06:50.533 "raid_level": "raid0", 00:06:50.533 "superblock": true, 00:06:50.533 "num_base_bdevs": 2, 00:06:50.533 "num_base_bdevs_discovered": 2, 00:06:50.533 "num_base_bdevs_operational": 2, 00:06:50.533 "base_bdevs_list": [ 00:06:50.533 { 00:06:50.533 "name": "pt1", 00:06:50.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:50.533 "is_configured": true, 00:06:50.533 "data_offset": 2048, 00:06:50.533 "data_size": 63488 00:06:50.533 }, 00:06:50.533 { 00:06:50.533 "name": "pt2", 00:06:50.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:50.533 "is_configured": true, 00:06:50.533 "data_offset": 2048, 00:06:50.533 "data_size": 63488 00:06:50.533 } 00:06:50.533 ] 00:06:50.533 } 00:06:50.533 } 00:06:50.533 }' 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:50.533 pt2' 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.533 16:59:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.533 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:06:50.533 [2024-11-20 16:59:14.390583] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:50.792 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.792 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e43013e3-0280-471f-a64f-0c60206f273f '!=' e43013e3-0280-471f-a64f-0c60206f273f ']' 00:06:50.792 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:06:50.792 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:50.792 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:50.792 16:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60977 00:06:50.792 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60977 ']' 00:06:50.792 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60977 00:06:50.792 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:50.792 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.792 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60977 00:06:50.792 16:59:14 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.792 killing process with pid 60977 00:06:50.792 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.792 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60977' 00:06:50.792 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 60977 00:06:50.792 [2024-11-20 16:59:14.475789] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:50.792 [2024-11-20 16:59:14.475909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:50.792 16:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 60977 00:06:50.792 [2024-11-20 16:59:14.475977] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:50.792 [2024-11-20 16:59:14.475998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:50.792 [2024-11-20 16:59:14.644061] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:51.784 16:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:51.784 00:06:51.784 real 0m4.698s 00:06:51.784 user 0m6.975s 00:06:51.784 sys 0m0.711s 00:06:51.784 16:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.784 ************************************ 00:06:51.784 END TEST raid_superblock_test 00:06:51.784 ************************************ 00:06:51.784 16:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.784 16:59:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:06:51.784 16:59:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:51.784 16:59:15 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:06:51.784 16:59:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:51.784 ************************************ 00:06:51.784 START TEST raid_read_error_test 00:06:51.784 ************************************ 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.t519gkKk2e 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61183 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61183 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61183 ']' 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.784 16:59:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.043 [2024-11-20 16:59:15.754508] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:06:52.043 [2024-11-20 16:59:15.754717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61183 ] 00:06:52.302 [2024-11-20 16:59:15.941442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.302 [2024-11-20 16:59:16.069318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.562 [2024-11-20 16:59:16.265643] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.562 [2024-11-20 16:59:16.265711] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.130 BaseBdev1_malloc 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.130 true 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.130 [2024-11-20 16:59:16.799538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:53.130 [2024-11-20 16:59:16.799662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:53.130 [2024-11-20 16:59:16.799687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:53.130 [2024-11-20 16:59:16.799703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:53.130 [2024-11-20 16:59:16.802551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:53.130 [2024-11-20 16:59:16.802627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:53.130 BaseBdev1 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:06:53.130 BaseBdev2_malloc 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.130 true 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.130 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.130 [2024-11-20 16:59:16.850581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:53.130 [2024-11-20 16:59:16.850654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:53.130 [2024-11-20 16:59:16.850676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:53.130 [2024-11-20 16:59:16.850691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:53.130 [2024-11-20 16:59:16.853328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:53.131 [2024-11-20 16:59:16.853386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:53.131 BaseBdev2 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:53.131 16:59:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.131 [2024-11-20 16:59:16.858642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:53.131 [2024-11-20 16:59:16.861102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:53.131 [2024-11-20 16:59:16.861404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:53.131 [2024-11-20 16:59:16.861430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:53.131 [2024-11-20 16:59:16.861705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:06:53.131 [2024-11-20 16:59:16.861953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:53.131 [2024-11-20 16:59:16.861975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:53.131 [2024-11-20 16:59:16.862194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:53.131 "name": "raid_bdev1", 00:06:53.131 "uuid": "805e815d-896b-48f6-ab31-64ed9fbaee35", 00:06:53.131 "strip_size_kb": 64, 00:06:53.131 "state": "online", 00:06:53.131 "raid_level": "raid0", 00:06:53.131 "superblock": true, 00:06:53.131 "num_base_bdevs": 2, 00:06:53.131 "num_base_bdevs_discovered": 2, 00:06:53.131 "num_base_bdevs_operational": 2, 00:06:53.131 "base_bdevs_list": [ 00:06:53.131 { 00:06:53.131 "name": "BaseBdev1", 00:06:53.131 "uuid": "eb3f859b-1d02-5824-977b-662df3c51050", 00:06:53.131 "is_configured": true, 00:06:53.131 "data_offset": 2048, 00:06:53.131 "data_size": 63488 00:06:53.131 }, 00:06:53.131 { 00:06:53.131 "name": "BaseBdev2", 00:06:53.131 "uuid": "1ad750d6-8d8f-5ed1-a2c8-48c8946834cc", 00:06:53.131 "is_configured": true, 00:06:53.131 "data_offset": 2048, 00:06:53.131 "data_size": 63488 00:06:53.131 } 00:06:53.131 ] 00:06:53.131 }' 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:53.131 16:59:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.698 16:59:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:53.698 16:59:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:53.698 [2024-11-20 16:59:17.496102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:54.636 "name": "raid_bdev1", 00:06:54.636 "uuid": "805e815d-896b-48f6-ab31-64ed9fbaee35", 00:06:54.636 "strip_size_kb": 64, 00:06:54.636 "state": "online", 00:06:54.636 "raid_level": "raid0", 00:06:54.636 "superblock": true, 00:06:54.636 "num_base_bdevs": 2, 00:06:54.636 "num_base_bdevs_discovered": 2, 00:06:54.636 "num_base_bdevs_operational": 2, 00:06:54.636 "base_bdevs_list": [ 00:06:54.636 { 00:06:54.636 "name": "BaseBdev1", 00:06:54.636 "uuid": "eb3f859b-1d02-5824-977b-662df3c51050", 00:06:54.636 "is_configured": true, 00:06:54.636 "data_offset": 2048, 00:06:54.636 "data_size": 63488 00:06:54.636 }, 00:06:54.636 { 00:06:54.636 "name": "BaseBdev2", 00:06:54.636 "uuid": "1ad750d6-8d8f-5ed1-a2c8-48c8946834cc", 00:06:54.636 "is_configured": true, 00:06:54.636 "data_offset": 2048, 00:06:54.636 "data_size": 63488 00:06:54.636 } 00:06:54.636 ] 00:06:54.636 }' 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:54.636 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.205 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:55.205 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.205 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.205 [2024-11-20 16:59:18.921492] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:55.205 [2024-11-20 16:59:18.921547] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:55.205 [2024-11-20 16:59:18.925000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:55.205 [2024-11-20 16:59:18.925073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.205 [2024-11-20 16:59:18.925130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:55.205 [2024-11-20 16:59:18.925161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:55.205 { 00:06:55.205 "results": [ 00:06:55.205 { 00:06:55.205 "job": "raid_bdev1", 00:06:55.205 "core_mask": "0x1", 00:06:55.205 "workload": "randrw", 00:06:55.205 "percentage": 50, 00:06:55.205 "status": "finished", 00:06:55.205 "queue_depth": 1, 00:06:55.205 "io_size": 131072, 00:06:55.205 "runtime": 1.423331, 00:06:55.205 "iops": 12398.380980952428, 00:06:55.205 "mibps": 1549.7976226190535, 00:06:55.205 "io_failed": 1, 00:06:55.205 "io_timeout": 0, 00:06:55.205 "avg_latency_us": 112.15943954504245, 00:06:55.205 "min_latency_us": 34.21090909090909, 00:06:55.205 "max_latency_us": 1876.7127272727273 00:06:55.205 } 00:06:55.206 ], 00:06:55.206 "core_count": 1 00:06:55.206 } 00:06:55.206 16:59:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.206 16:59:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61183 00:06:55.206 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61183 ']' 00:06:55.206 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61183 00:06:55.206 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:06:55.206 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.206 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61183 00:06:55.206 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.206 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.206 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61183' 00:06:55.206 killing process with pid 61183 00:06:55.206 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61183 00:06:55.206 [2024-11-20 16:59:18.962047] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:55.206 16:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61183 00:06:55.206 [2024-11-20 16:59:19.063896] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:56.589 16:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.t519gkKk2e 00:06:56.589 16:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:56.589 16:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:56.589 16:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:06:56.589 16:59:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:56.589 16:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:56.589 16:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:56.589 16:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:06:56.589 00:06:56.589 real 0m4.404s 00:06:56.589 user 0m5.578s 00:06:56.589 sys 0m0.563s 00:06:56.589 16:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.589 ************************************ 00:06:56.589 END TEST raid_read_error_test 00:06:56.589 ************************************ 00:06:56.589 16:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.589 16:59:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:06:56.589 16:59:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:56.589 16:59:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.589 16:59:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:56.589 ************************************ 00:06:56.589 START TEST raid_write_error_test 00:06:56.589 ************************************ 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:56.589 16:59:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jIaqPT57ng 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61334 00:06:56.589 16:59:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61334 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61334 ']' 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.589 16:59:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.589 [2024-11-20 16:59:20.211028] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:06:56.589 [2024-11-20 16:59:20.211207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61334 ] 00:06:56.589 [2024-11-20 16:59:20.398219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.848 [2024-11-20 16:59:20.509845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.848 [2024-11-20 16:59:20.683986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.848 [2024-11-20 16:59:20.684035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.417 BaseBdev1_malloc 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.417 true 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.417 [2024-11-20 16:59:21.250903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:57.417 [2024-11-20 16:59:21.251147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:57.417 [2024-11-20 16:59:21.251184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:57.417 [2024-11-20 16:59:21.251202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:57.417 [2024-11-20 16:59:21.253914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:57.417 [2024-11-20 16:59:21.253959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:57.417 BaseBdev1 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.417 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.676 BaseBdev2_malloc 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:57.676 16:59:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.676 true 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.676 [2024-11-20 16:59:21.303433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:57.676 [2024-11-20 16:59:21.303718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:57.676 [2024-11-20 16:59:21.303751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:57.676 [2024-11-20 16:59:21.303779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:57.676 [2024-11-20 16:59:21.306439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:57.676 [2024-11-20 16:59:21.306498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:57.676 BaseBdev2 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.676 [2024-11-20 16:59:21.315579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:06:57.676 [2024-11-20 16:59:21.317873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:57.676 [2024-11-20 16:59:21.318252] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:57.676 [2024-11-20 16:59:21.318282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:57.676 [2024-11-20 16:59:21.318571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:06:57.676 [2024-11-20 16:59:21.318778] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:57.676 [2024-11-20 16:59:21.318797] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:06:57.676 [2024-11-20 16:59:21.318981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.676 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.676 "name": "raid_bdev1", 00:06:57.676 "uuid": "7c87b18f-ca4b-4bd1-886e-ac840ac1f76a", 00:06:57.676 "strip_size_kb": 64, 00:06:57.676 "state": "online", 00:06:57.676 "raid_level": "raid0", 00:06:57.676 "superblock": true, 00:06:57.676 "num_base_bdevs": 2, 00:06:57.676 "num_base_bdevs_discovered": 2, 00:06:57.676 "num_base_bdevs_operational": 2, 00:06:57.676 "base_bdevs_list": [ 00:06:57.676 { 00:06:57.677 "name": "BaseBdev1", 00:06:57.677 "uuid": "9ae203b2-d275-57df-a56d-5cb875172344", 00:06:57.677 "is_configured": true, 00:06:57.677 "data_offset": 2048, 00:06:57.677 "data_size": 63488 00:06:57.677 }, 00:06:57.677 { 00:06:57.677 "name": "BaseBdev2", 00:06:57.677 "uuid": "02e40d87-2246-59da-98fa-6b8f86c04608", 00:06:57.677 "is_configured": true, 00:06:57.677 "data_offset": 2048, 00:06:57.677 "data_size": 63488 00:06:57.677 } 00:06:57.677 ] 00:06:57.677 }' 00:06:57.677 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.677 16:59:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.245 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:58.245 16:59:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:58.245 [2024-11-20 16:59:21.960886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.181 "name": "raid_bdev1", 00:06:59.181 "uuid": "7c87b18f-ca4b-4bd1-886e-ac840ac1f76a", 00:06:59.181 "strip_size_kb": 64, 00:06:59.181 "state": "online", 00:06:59.181 "raid_level": "raid0", 00:06:59.181 "superblock": true, 00:06:59.181 "num_base_bdevs": 2, 00:06:59.181 "num_base_bdevs_discovered": 2, 00:06:59.181 "num_base_bdevs_operational": 2, 00:06:59.181 "base_bdevs_list": [ 00:06:59.181 { 00:06:59.181 "name": "BaseBdev1", 00:06:59.181 "uuid": "9ae203b2-d275-57df-a56d-5cb875172344", 00:06:59.181 "is_configured": true, 00:06:59.181 "data_offset": 2048, 00:06:59.181 "data_size": 63488 00:06:59.181 }, 00:06:59.181 { 00:06:59.181 "name": "BaseBdev2", 00:06:59.181 "uuid": "02e40d87-2246-59da-98fa-6b8f86c04608", 00:06:59.181 "is_configured": true, 00:06:59.181 "data_offset": 2048, 00:06:59.181 "data_size": 63488 00:06:59.181 } 00:06:59.181 ] 00:06:59.181 }' 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.181 16:59:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.750 16:59:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:59.750 16:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.750 16:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.750 [2024-11-20 16:59:23.375547] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:59.750 [2024-11-20 16:59:23.375613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:59.750 [2024-11-20 16:59:23.379003] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:59.750 [2024-11-20 16:59:23.379065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.750 [2024-11-20 16:59:23.379124] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:59.750 [2024-11-20 16:59:23.379165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:06:59.750 { 00:06:59.750 "results": [ 00:06:59.750 { 00:06:59.750 "job": "raid_bdev1", 00:06:59.750 "core_mask": "0x1", 00:06:59.750 "workload": "randrw", 00:06:59.750 "percentage": 50, 00:06:59.750 "status": "finished", 00:06:59.750 "queue_depth": 1, 00:06:59.750 "io_size": 131072, 00:06:59.750 "runtime": 1.41263, 00:06:59.750 "iops": 11666.890834825821, 00:06:59.750 "mibps": 1458.3613543532276, 00:06:59.750 "io_failed": 1, 00:06:59.750 "io_timeout": 0, 00:06:59.750 "avg_latency_us": 119.63566998709334, 00:06:59.750 "min_latency_us": 34.443636363636365, 00:06:59.750 "max_latency_us": 1921.3963636363637 00:06:59.750 } 00:06:59.750 ], 00:06:59.750 "core_count": 1 00:06:59.750 } 00:06:59.750 16:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.750 16:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61334 00:06:59.750 16:59:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61334 ']' 00:06:59.750 16:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61334 00:06:59.750 16:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:06:59.750 16:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.750 16:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61334 00:06:59.750 killing process with pid 61334 00:06:59.750 16:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.750 16:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.750 16:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61334' 00:06:59.750 16:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61334 00:06:59.750 [2024-11-20 16:59:23.420141] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:59.750 16:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61334 00:06:59.750 [2024-11-20 16:59:23.538601] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:00.687 16:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jIaqPT57ng 00:07:00.687 16:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:00.687 16:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:00.687 16:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:00.687 16:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:00.687 16:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:00.687 16:59:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:00.946 16:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:00.946 00:07:00.946 real 0m4.465s 00:07:00.946 user 0m5.648s 00:07:00.946 sys 0m0.547s 00:07:00.946 16:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.946 16:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.946 ************************************ 00:07:00.946 END TEST raid_write_error_test 00:07:00.946 ************************************ 00:07:00.946 16:59:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:00.946 16:59:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:00.946 16:59:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:00.946 16:59:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.946 16:59:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:00.946 ************************************ 00:07:00.946 START TEST raid_state_function_test 00:07:00.946 ************************************ 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:00.946 Process raid pid: 61472 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@229 -- # raid_pid=61472 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61472' 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61472 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61472 ']' 00:07:00.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.946 16:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.946 [2024-11-20 16:59:24.728344] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:07:00.946 [2024-11-20 16:59:24.728839] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.205 [2024-11-20 16:59:24.915781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.205 [2024-11-20 16:59:25.059427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.463 [2024-11-20 16:59:25.293509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.463 [2024-11-20 16:59:25.293545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.031 [2024-11-20 16:59:25.732551] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:02.031 [2024-11-20 16:59:25.732813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:02.031 [2024-11-20 16:59:25.732849] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:02.031 [2024-11-20 16:59:25.732869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.031 16:59:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.031 "name": "Existed_Raid", 00:07:02.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.031 "strip_size_kb": 64, 00:07:02.031 "state": "configuring", 00:07:02.031 
"raid_level": "concat", 00:07:02.031 "superblock": false, 00:07:02.031 "num_base_bdevs": 2, 00:07:02.031 "num_base_bdevs_discovered": 0, 00:07:02.031 "num_base_bdevs_operational": 2, 00:07:02.031 "base_bdevs_list": [ 00:07:02.031 { 00:07:02.031 "name": "BaseBdev1", 00:07:02.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.031 "is_configured": false, 00:07:02.031 "data_offset": 0, 00:07:02.031 "data_size": 0 00:07:02.031 }, 00:07:02.031 { 00:07:02.031 "name": "BaseBdev2", 00:07:02.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.031 "is_configured": false, 00:07:02.031 "data_offset": 0, 00:07:02.031 "data_size": 0 00:07:02.031 } 00:07:02.031 ] 00:07:02.031 }' 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.031 16:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.599 [2024-11-20 16:59:26.244609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:02.599 [2024-11-20 16:59:26.244647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:02.599 [2024-11-20 16:59:26.252604] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:02.599 [2024-11-20 16:59:26.252667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:02.599 [2024-11-20 16:59:26.252681] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:02.599 [2024-11-20 16:59:26.252697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.599 [2024-11-20 16:59:26.294754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:02.599 BaseBdev1 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.599 [ 00:07:02.599 { 00:07:02.599 "name": "BaseBdev1", 00:07:02.599 "aliases": [ 00:07:02.599 "a02157e1-607f-4b01-9f17-e6d46ebca8d5" 00:07:02.599 ], 00:07:02.599 "product_name": "Malloc disk", 00:07:02.599 "block_size": 512, 00:07:02.599 "num_blocks": 65536, 00:07:02.599 "uuid": "a02157e1-607f-4b01-9f17-e6d46ebca8d5", 00:07:02.599 "assigned_rate_limits": { 00:07:02.599 "rw_ios_per_sec": 0, 00:07:02.599 "rw_mbytes_per_sec": 0, 00:07:02.599 "r_mbytes_per_sec": 0, 00:07:02.599 "w_mbytes_per_sec": 0 00:07:02.599 }, 00:07:02.599 "claimed": true, 00:07:02.599 "claim_type": "exclusive_write", 00:07:02.599 "zoned": false, 00:07:02.599 "supported_io_types": { 00:07:02.599 "read": true, 00:07:02.599 "write": true, 00:07:02.599 "unmap": true, 00:07:02.599 "flush": true, 00:07:02.599 "reset": true, 00:07:02.599 "nvme_admin": false, 00:07:02.599 "nvme_io": false, 00:07:02.599 "nvme_io_md": false, 00:07:02.599 "write_zeroes": true, 00:07:02.599 "zcopy": true, 00:07:02.599 "get_zone_info": false, 00:07:02.599 "zone_management": false, 00:07:02.599 "zone_append": false, 00:07:02.599 "compare": false, 00:07:02.599 "compare_and_write": false, 00:07:02.599 "abort": true, 00:07:02.599 "seek_hole": false, 00:07:02.599 "seek_data": false, 00:07:02.599 "copy": true, 00:07:02.599 "nvme_iov_md": 
false 00:07:02.599 }, 00:07:02.599 "memory_domains": [ 00:07:02.599 { 00:07:02.599 "dma_device_id": "system", 00:07:02.599 "dma_device_type": 1 00:07:02.599 }, 00:07:02.599 { 00:07:02.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.599 "dma_device_type": 2 00:07:02.599 } 00:07:02.599 ], 00:07:02.599 "driver_specific": {} 00:07:02.599 } 00:07:02.599 ] 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.599 
16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.599 "name": "Existed_Raid", 00:07:02.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.599 "strip_size_kb": 64, 00:07:02.599 "state": "configuring", 00:07:02.599 "raid_level": "concat", 00:07:02.599 "superblock": false, 00:07:02.599 "num_base_bdevs": 2, 00:07:02.599 "num_base_bdevs_discovered": 1, 00:07:02.599 "num_base_bdevs_operational": 2, 00:07:02.599 "base_bdevs_list": [ 00:07:02.599 { 00:07:02.599 "name": "BaseBdev1", 00:07:02.599 "uuid": "a02157e1-607f-4b01-9f17-e6d46ebca8d5", 00:07:02.599 "is_configured": true, 00:07:02.599 "data_offset": 0, 00:07:02.599 "data_size": 65536 00:07:02.599 }, 00:07:02.599 { 00:07:02.599 "name": "BaseBdev2", 00:07:02.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.599 "is_configured": false, 00:07:02.599 "data_offset": 0, 00:07:02.599 "data_size": 0 00:07:02.599 } 00:07:02.599 ] 00:07:02.599 }' 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.599 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.166 [2024-11-20 16:59:26.839001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:03.166 [2024-11-20 16:59:26.839274] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.166 [2024-11-20 16:59:26.847033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:03.166 [2024-11-20 16:59:26.849615] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:03.166 [2024-11-20 16:59:26.849677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.166 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.166 "name": "Existed_Raid", 00:07:03.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.166 "strip_size_kb": 64, 00:07:03.166 "state": "configuring", 00:07:03.166 "raid_level": "concat", 00:07:03.166 "superblock": false, 00:07:03.166 "num_base_bdevs": 2, 00:07:03.166 "num_base_bdevs_discovered": 1, 00:07:03.166 "num_base_bdevs_operational": 2, 00:07:03.166 "base_bdevs_list": [ 00:07:03.166 { 00:07:03.166 "name": "BaseBdev1", 00:07:03.166 "uuid": "a02157e1-607f-4b01-9f17-e6d46ebca8d5", 00:07:03.166 "is_configured": true, 00:07:03.166 "data_offset": 0, 00:07:03.166 "data_size": 65536 00:07:03.166 }, 00:07:03.166 { 00:07:03.166 "name": "BaseBdev2", 00:07:03.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.166 "is_configured": false, 00:07:03.166 "data_offset": 0, 00:07:03.166 "data_size": 0 00:07:03.167 } 
00:07:03.167 ] 00:07:03.167 }' 00:07:03.167 16:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.167 16:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.737 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:03.737 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.737 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.737 [2024-11-20 16:59:27.409655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:03.737 [2024-11-20 16:59:27.409711] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:03.738 [2024-11-20 16:59:27.409722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:03.738 [2024-11-20 16:59:27.410076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:03.738 [2024-11-20 16:59:27.410328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:03.738 [2024-11-20 16:59:27.410356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:03.738 [2024-11-20 16:59:27.410670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.738 BaseBdev2 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:03.738 16:59:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.738 [ 00:07:03.738 { 00:07:03.738 "name": "BaseBdev2", 00:07:03.738 "aliases": [ 00:07:03.738 "27c806a9-023e-4fe1-8172-fe4c4310ba95" 00:07:03.738 ], 00:07:03.738 "product_name": "Malloc disk", 00:07:03.738 "block_size": 512, 00:07:03.738 "num_blocks": 65536, 00:07:03.738 "uuid": "27c806a9-023e-4fe1-8172-fe4c4310ba95", 00:07:03.738 "assigned_rate_limits": { 00:07:03.738 "rw_ios_per_sec": 0, 00:07:03.738 "rw_mbytes_per_sec": 0, 00:07:03.738 "r_mbytes_per_sec": 0, 00:07:03.738 "w_mbytes_per_sec": 0 00:07:03.738 }, 00:07:03.738 "claimed": true, 00:07:03.738 "claim_type": "exclusive_write", 00:07:03.738 "zoned": false, 00:07:03.738 "supported_io_types": { 00:07:03.738 "read": true, 00:07:03.738 "write": true, 00:07:03.738 "unmap": true, 00:07:03.738 "flush": true, 00:07:03.738 "reset": true, 00:07:03.738 "nvme_admin": false, 00:07:03.738 "nvme_io": false, 00:07:03.738 "nvme_io_md": 
false, 00:07:03.738 "write_zeroes": true, 00:07:03.738 "zcopy": true, 00:07:03.738 "get_zone_info": false, 00:07:03.738 "zone_management": false, 00:07:03.738 "zone_append": false, 00:07:03.738 "compare": false, 00:07:03.738 "compare_and_write": false, 00:07:03.738 "abort": true, 00:07:03.738 "seek_hole": false, 00:07:03.738 "seek_data": false, 00:07:03.738 "copy": true, 00:07:03.738 "nvme_iov_md": false 00:07:03.738 }, 00:07:03.738 "memory_domains": [ 00:07:03.738 { 00:07:03.738 "dma_device_id": "system", 00:07:03.738 "dma_device_type": 1 00:07:03.738 }, 00:07:03.738 { 00:07:03.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.738 "dma_device_type": 2 00:07:03.738 } 00:07:03.738 ], 00:07:03.738 "driver_specific": {} 00:07:03.738 } 00:07:03.738 ] 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.738 "name": "Existed_Raid", 00:07:03.738 "uuid": "8e49035b-5782-4970-81f9-4e91ca1505c0", 00:07:03.738 "strip_size_kb": 64, 00:07:03.738 "state": "online", 00:07:03.738 "raid_level": "concat", 00:07:03.738 "superblock": false, 00:07:03.738 "num_base_bdevs": 2, 00:07:03.738 "num_base_bdevs_discovered": 2, 00:07:03.738 "num_base_bdevs_operational": 2, 00:07:03.738 "base_bdevs_list": [ 00:07:03.738 { 00:07:03.738 "name": "BaseBdev1", 00:07:03.738 "uuid": "a02157e1-607f-4b01-9f17-e6d46ebca8d5", 00:07:03.738 "is_configured": true, 00:07:03.738 "data_offset": 0, 00:07:03.738 "data_size": 65536 00:07:03.738 }, 00:07:03.738 { 00:07:03.738 "name": "BaseBdev2", 00:07:03.738 "uuid": "27c806a9-023e-4fe1-8172-fe4c4310ba95", 00:07:03.738 "is_configured": true, 00:07:03.738 "data_offset": 0, 00:07:03.738 "data_size": 65536 00:07:03.738 } 00:07:03.738 ] 00:07:03.738 }' 00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:03.738 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.306 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:04.306 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:04.306 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:04.306 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:04.306 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:04.306 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:04.306 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:04.306 16:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:04.306 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.306 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.306 [2024-11-20 16:59:27.962248] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.306 16:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.306 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:04.306 "name": "Existed_Raid", 00:07:04.306 "aliases": [ 00:07:04.306 "8e49035b-5782-4970-81f9-4e91ca1505c0" 00:07:04.306 ], 00:07:04.306 "product_name": "Raid Volume", 00:07:04.306 "block_size": 512, 00:07:04.306 "num_blocks": 131072, 00:07:04.306 "uuid": "8e49035b-5782-4970-81f9-4e91ca1505c0", 00:07:04.306 "assigned_rate_limits": { 00:07:04.306 "rw_ios_per_sec": 0, 00:07:04.306 "rw_mbytes_per_sec": 0, 00:07:04.306 "r_mbytes_per_sec": 
0, 00:07:04.306 "w_mbytes_per_sec": 0 00:07:04.306 }, 00:07:04.306 "claimed": false, 00:07:04.306 "zoned": false, 00:07:04.306 "supported_io_types": { 00:07:04.306 "read": true, 00:07:04.306 "write": true, 00:07:04.306 "unmap": true, 00:07:04.306 "flush": true, 00:07:04.306 "reset": true, 00:07:04.306 "nvme_admin": false, 00:07:04.306 "nvme_io": false, 00:07:04.306 "nvme_io_md": false, 00:07:04.306 "write_zeroes": true, 00:07:04.306 "zcopy": false, 00:07:04.306 "get_zone_info": false, 00:07:04.306 "zone_management": false, 00:07:04.306 "zone_append": false, 00:07:04.306 "compare": false, 00:07:04.306 "compare_and_write": false, 00:07:04.306 "abort": false, 00:07:04.306 "seek_hole": false, 00:07:04.306 "seek_data": false, 00:07:04.306 "copy": false, 00:07:04.306 "nvme_iov_md": false 00:07:04.307 }, 00:07:04.307 "memory_domains": [ 00:07:04.307 { 00:07:04.307 "dma_device_id": "system", 00:07:04.307 "dma_device_type": 1 00:07:04.307 }, 00:07:04.307 { 00:07:04.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.307 "dma_device_type": 2 00:07:04.307 }, 00:07:04.307 { 00:07:04.307 "dma_device_id": "system", 00:07:04.307 "dma_device_type": 1 00:07:04.307 }, 00:07:04.307 { 00:07:04.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.307 "dma_device_type": 2 00:07:04.307 } 00:07:04.307 ], 00:07:04.307 "driver_specific": { 00:07:04.307 "raid": { 00:07:04.307 "uuid": "8e49035b-5782-4970-81f9-4e91ca1505c0", 00:07:04.307 "strip_size_kb": 64, 00:07:04.307 "state": "online", 00:07:04.307 "raid_level": "concat", 00:07:04.307 "superblock": false, 00:07:04.307 "num_base_bdevs": 2, 00:07:04.307 "num_base_bdevs_discovered": 2, 00:07:04.307 "num_base_bdevs_operational": 2, 00:07:04.307 "base_bdevs_list": [ 00:07:04.307 { 00:07:04.307 "name": "BaseBdev1", 00:07:04.307 "uuid": "a02157e1-607f-4b01-9f17-e6d46ebca8d5", 00:07:04.307 "is_configured": true, 00:07:04.307 "data_offset": 0, 00:07:04.307 "data_size": 65536 00:07:04.307 }, 00:07:04.307 { 00:07:04.307 "name": "BaseBdev2", 
00:07:04.307 "uuid": "27c806a9-023e-4fe1-8172-fe4c4310ba95", 00:07:04.307 "is_configured": true, 00:07:04.307 "data_offset": 0, 00:07:04.307 "data_size": 65536 00:07:04.307 } 00:07:04.307 ] 00:07:04.307 } 00:07:04.307 } 00:07:04.307 }' 00:07:04.307 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:04.307 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:04.307 BaseBdev2' 00:07:04.307 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.307 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:04.307 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:04.307 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:04.307 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.307 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.307 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.307 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.307 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:04.307 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:04.307 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.566 [2024-11-20 16:59:28.230251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:04.566 [2024-11-20 16:59:28.230463] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:04.566 [2024-11-20 16:59:28.230549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.566 "name": "Existed_Raid", 00:07:04.566 "uuid": "8e49035b-5782-4970-81f9-4e91ca1505c0", 00:07:04.566 "strip_size_kb": 64, 00:07:04.566 
"state": "offline", 00:07:04.566 "raid_level": "concat", 00:07:04.566 "superblock": false, 00:07:04.566 "num_base_bdevs": 2, 00:07:04.566 "num_base_bdevs_discovered": 1, 00:07:04.566 "num_base_bdevs_operational": 1, 00:07:04.566 "base_bdevs_list": [ 00:07:04.566 { 00:07:04.566 "name": null, 00:07:04.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.566 "is_configured": false, 00:07:04.566 "data_offset": 0, 00:07:04.566 "data_size": 65536 00:07:04.566 }, 00:07:04.566 { 00:07:04.566 "name": "BaseBdev2", 00:07:04.566 "uuid": "27c806a9-023e-4fe1-8172-fe4c4310ba95", 00:07:04.566 "is_configured": true, 00:07:04.566 "data_offset": 0, 00:07:04.566 "data_size": 65536 00:07:04.566 } 00:07:04.566 ] 00:07:04.566 }' 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.566 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.135 [2024-11-20 16:59:28.901896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:05.135 [2024-11-20 16:59:28.902169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.135 16:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.395 16:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:05.395 16:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:05.395 16:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:05.395 16:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61472 00:07:05.395 16:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61472 ']' 00:07:05.395 16:59:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61472 00:07:05.395 16:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:05.395 16:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.395 16:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61472 00:07:05.395 killing process with pid 61472 00:07:05.395 16:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.395 16:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.395 16:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61472' 00:07:05.395 16:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61472 00:07:05.395 [2024-11-20 16:59:29.067640] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.395 16:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61472 00:07:05.395 [2024-11-20 16:59:29.081781] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:06.331 00:07:06.331 real 0m5.406s 00:07:06.331 user 0m8.209s 00:07:06.331 sys 0m0.827s 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.331 ************************************ 00:07:06.331 END TEST raid_state_function_test 00:07:06.331 ************************************ 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.331 16:59:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:06.331 16:59:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:06.331 16:59:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.331 16:59:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.331 ************************************ 00:07:06.331 START TEST raid_state_function_test_sb 00:07:06.331 ************************************ 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:06.331 Process raid pid: 61731 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61731 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61731' 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61731 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61731 ']' 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.331 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.331 16:59:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.331 [2024-11-20 16:59:30.181627] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:07:06.331 [2024-11-20 16:59:30.181888] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.591 [2024-11-20 16:59:30.364518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.850 [2024-11-20 16:59:30.500092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.850 [2024-11-20 16:59:30.708937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.850 [2024-11-20 16:59:30.708994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.417 [2024-11-20 16:59:31.106218] bdev.c:8626:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:07.417 [2024-11-20 16:59:31.106308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:07.417 [2024-11-20 16:59:31.106323] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:07.417 [2024-11-20 16:59:31.106337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.417 "name": "Existed_Raid", 00:07:07.417 "uuid": "3c0afd7d-7238-46ab-8070-4d44474df3a7", 00:07:07.417 "strip_size_kb": 64, 00:07:07.417 "state": "configuring", 00:07:07.417 "raid_level": "concat", 00:07:07.417 "superblock": true, 00:07:07.417 "num_base_bdevs": 2, 00:07:07.417 "num_base_bdevs_discovered": 0, 00:07:07.417 "num_base_bdevs_operational": 2, 00:07:07.417 "base_bdevs_list": [ 00:07:07.417 { 00:07:07.417 "name": "BaseBdev1", 00:07:07.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.417 "is_configured": false, 00:07:07.417 "data_offset": 0, 00:07:07.417 "data_size": 0 00:07:07.417 }, 00:07:07.417 { 00:07:07.417 "name": "BaseBdev2", 00:07:07.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.417 "is_configured": false, 00:07:07.417 "data_offset": 0, 00:07:07.417 "data_size": 0 00:07:07.417 } 00:07:07.417 ] 00:07:07.417 }' 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.417 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.049 [2024-11-20 16:59:31.670270] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:08.049 [2024-11-20 16:59:31.670467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.049 [2024-11-20 16:59:31.678305] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:08.049 [2024-11-20 16:59:31.678404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:08.049 [2024-11-20 16:59:31.678419] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:08.049 [2024-11-20 16:59:31.678452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.049 [2024-11-20 16:59:31.720451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:08.049 BaseBdev1 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.049 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.049 [ 00:07:08.049 { 00:07:08.049 "name": "BaseBdev1", 00:07:08.049 "aliases": [ 00:07:08.049 "8551994d-18d1-45ff-92f9-0dc5a5f7185b" 00:07:08.049 ], 00:07:08.049 "product_name": "Malloc disk", 00:07:08.049 "block_size": 512, 00:07:08.049 "num_blocks": 65536, 00:07:08.049 "uuid": "8551994d-18d1-45ff-92f9-0dc5a5f7185b", 00:07:08.049 "assigned_rate_limits": { 00:07:08.049 "rw_ios_per_sec": 0, 00:07:08.049 "rw_mbytes_per_sec": 0, 00:07:08.049 "r_mbytes_per_sec": 0, 00:07:08.049 "w_mbytes_per_sec": 0 00:07:08.049 }, 00:07:08.049 "claimed": true, 
00:07:08.049 "claim_type": "exclusive_write", 00:07:08.049 "zoned": false, 00:07:08.049 "supported_io_types": { 00:07:08.049 "read": true, 00:07:08.049 "write": true, 00:07:08.049 "unmap": true, 00:07:08.049 "flush": true, 00:07:08.049 "reset": true, 00:07:08.049 "nvme_admin": false, 00:07:08.049 "nvme_io": false, 00:07:08.049 "nvme_io_md": false, 00:07:08.049 "write_zeroes": true, 00:07:08.049 "zcopy": true, 00:07:08.049 "get_zone_info": false, 00:07:08.049 "zone_management": false, 00:07:08.049 "zone_append": false, 00:07:08.049 "compare": false, 00:07:08.049 "compare_and_write": false, 00:07:08.049 "abort": true, 00:07:08.049 "seek_hole": false, 00:07:08.049 "seek_data": false, 00:07:08.049 "copy": true, 00:07:08.049 "nvme_iov_md": false 00:07:08.049 }, 00:07:08.049 "memory_domains": [ 00:07:08.049 { 00:07:08.049 "dma_device_id": "system", 00:07:08.049 "dma_device_type": 1 00:07:08.049 }, 00:07:08.049 { 00:07:08.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.050 "dma_device_type": 2 00:07:08.050 } 00:07:08.050 ], 00:07:08.050 "driver_specific": {} 00:07:08.050 } 00:07:08.050 ] 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.050 16:59:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.050 "name": "Existed_Raid", 00:07:08.050 "uuid": "69f33cf0-dd99-4e37-849f-85976e17dc85", 00:07:08.050 "strip_size_kb": 64, 00:07:08.050 "state": "configuring", 00:07:08.050 "raid_level": "concat", 00:07:08.050 "superblock": true, 00:07:08.050 "num_base_bdevs": 2, 00:07:08.050 "num_base_bdevs_discovered": 1, 00:07:08.050 "num_base_bdevs_operational": 2, 00:07:08.050 "base_bdevs_list": [ 00:07:08.050 { 00:07:08.050 "name": "BaseBdev1", 00:07:08.050 "uuid": "8551994d-18d1-45ff-92f9-0dc5a5f7185b", 00:07:08.050 "is_configured": true, 00:07:08.050 "data_offset": 2048, 00:07:08.050 "data_size": 63488 00:07:08.050 }, 00:07:08.050 { 00:07:08.050 "name": "BaseBdev2", 00:07:08.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.050 
"is_configured": false, 00:07:08.050 "data_offset": 0, 00:07:08.050 "data_size": 0 00:07:08.050 } 00:07:08.050 ] 00:07:08.050 }' 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.050 16:59:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.644 [2024-11-20 16:59:32.244641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:08.644 [2024-11-20 16:59:32.244711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.644 [2024-11-20 16:59:32.252654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:08.644 [2024-11-20 16:59:32.255056] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:08.644 [2024-11-20 16:59:32.255120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.644 16:59:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.644 16:59:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.644 "name": "Existed_Raid", 00:07:08.644 "uuid": "b09d9d0d-8354-4113-b560-ffe0aa7a1943", 00:07:08.644 "strip_size_kb": 64, 00:07:08.644 "state": "configuring", 00:07:08.644 "raid_level": "concat", 00:07:08.644 "superblock": true, 00:07:08.644 "num_base_bdevs": 2, 00:07:08.644 "num_base_bdevs_discovered": 1, 00:07:08.644 "num_base_bdevs_operational": 2, 00:07:08.644 "base_bdevs_list": [ 00:07:08.644 { 00:07:08.644 "name": "BaseBdev1", 00:07:08.644 "uuid": "8551994d-18d1-45ff-92f9-0dc5a5f7185b", 00:07:08.644 "is_configured": true, 00:07:08.644 "data_offset": 2048, 00:07:08.644 "data_size": 63488 00:07:08.644 }, 00:07:08.644 { 00:07:08.644 "name": "BaseBdev2", 00:07:08.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.644 "is_configured": false, 00:07:08.644 "data_offset": 0, 00:07:08.644 "data_size": 0 00:07:08.644 } 00:07:08.644 ] 00:07:08.644 }' 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.644 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.903 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:08.903 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.903 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.163 [2024-11-20 16:59:32.793085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:09.163 [2024-11-20 16:59:32.793389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:09.163 [2024-11-20 16:59:32.793439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:09.163 [2024-11-20 16:59:32.793757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:09.163 BaseBdev2 00:07:09.163 [2024-11-20 16:59:32.794006] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:09.163 [2024-11-20 16:59:32.794029] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:09.163 [2024-11-20 16:59:32.794183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.163 16:59:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.163 [ 00:07:09.163 { 00:07:09.163 "name": "BaseBdev2", 00:07:09.163 "aliases": [ 00:07:09.163 "3e7d9da3-313f-41d8-a948-9302fe5393ea" 00:07:09.163 ], 00:07:09.163 "product_name": "Malloc disk", 00:07:09.163 "block_size": 512, 00:07:09.163 "num_blocks": 65536, 00:07:09.163 "uuid": "3e7d9da3-313f-41d8-a948-9302fe5393ea", 00:07:09.163 "assigned_rate_limits": { 00:07:09.163 "rw_ios_per_sec": 0, 00:07:09.163 "rw_mbytes_per_sec": 0, 00:07:09.163 "r_mbytes_per_sec": 0, 00:07:09.163 "w_mbytes_per_sec": 0 00:07:09.163 }, 00:07:09.163 "claimed": true, 00:07:09.163 "claim_type": "exclusive_write", 00:07:09.163 "zoned": false, 00:07:09.163 "supported_io_types": { 00:07:09.163 "read": true, 00:07:09.163 "write": true, 00:07:09.163 "unmap": true, 00:07:09.163 "flush": true, 00:07:09.163 "reset": true, 00:07:09.163 "nvme_admin": false, 00:07:09.163 "nvme_io": false, 00:07:09.163 "nvme_io_md": false, 00:07:09.163 "write_zeroes": true, 00:07:09.163 "zcopy": true, 00:07:09.163 "get_zone_info": false, 00:07:09.163 "zone_management": false, 00:07:09.163 "zone_append": false, 00:07:09.163 "compare": false, 00:07:09.163 "compare_and_write": false, 00:07:09.163 "abort": true, 00:07:09.163 "seek_hole": false, 00:07:09.163 "seek_data": false, 00:07:09.163 "copy": true, 00:07:09.163 "nvme_iov_md": false 00:07:09.163 }, 00:07:09.163 "memory_domains": [ 00:07:09.163 { 00:07:09.163 "dma_device_id": "system", 00:07:09.163 "dma_device_type": 1 00:07:09.163 }, 00:07:09.163 { 00:07:09.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.163 "dma_device_type": 2 00:07:09.163 } 00:07:09.163 ], 00:07:09.163 "driver_specific": {} 00:07:09.163 } 00:07:09.163 ] 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:09.163 16:59:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.163 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.163 16:59:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.164 "name": "Existed_Raid", 00:07:09.164 "uuid": "b09d9d0d-8354-4113-b560-ffe0aa7a1943", 00:07:09.164 "strip_size_kb": 64, 00:07:09.164 "state": "online", 00:07:09.164 "raid_level": "concat", 00:07:09.164 "superblock": true, 00:07:09.164 "num_base_bdevs": 2, 00:07:09.164 "num_base_bdevs_discovered": 2, 00:07:09.164 "num_base_bdevs_operational": 2, 00:07:09.164 "base_bdevs_list": [ 00:07:09.164 { 00:07:09.164 "name": "BaseBdev1", 00:07:09.164 "uuid": "8551994d-18d1-45ff-92f9-0dc5a5f7185b", 00:07:09.164 "is_configured": true, 00:07:09.164 "data_offset": 2048, 00:07:09.164 "data_size": 63488 00:07:09.164 }, 00:07:09.164 { 00:07:09.164 "name": "BaseBdev2", 00:07:09.164 "uuid": "3e7d9da3-313f-41d8-a948-9302fe5393ea", 00:07:09.164 "is_configured": true, 00:07:09.164 "data_offset": 2048, 00:07:09.164 "data_size": 63488 00:07:09.164 } 00:07:09.164 ] 00:07:09.164 }' 00:07:09.164 16:59:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.164 16:59:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.734 [2024-11-20 16:59:33.333660] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:09.734 "name": "Existed_Raid", 00:07:09.734 "aliases": [ 00:07:09.734 "b09d9d0d-8354-4113-b560-ffe0aa7a1943" 00:07:09.734 ], 00:07:09.734 "product_name": "Raid Volume", 00:07:09.734 "block_size": 512, 00:07:09.734 "num_blocks": 126976, 00:07:09.734 "uuid": "b09d9d0d-8354-4113-b560-ffe0aa7a1943", 00:07:09.734 "assigned_rate_limits": { 00:07:09.734 "rw_ios_per_sec": 0, 00:07:09.734 "rw_mbytes_per_sec": 0, 00:07:09.734 "r_mbytes_per_sec": 0, 00:07:09.734 "w_mbytes_per_sec": 0 00:07:09.734 }, 00:07:09.734 "claimed": false, 00:07:09.734 "zoned": false, 00:07:09.734 "supported_io_types": { 00:07:09.734 "read": true, 00:07:09.734 "write": true, 00:07:09.734 "unmap": true, 00:07:09.734 "flush": true, 00:07:09.734 "reset": true, 00:07:09.734 "nvme_admin": false, 00:07:09.734 "nvme_io": false, 00:07:09.734 "nvme_io_md": false, 00:07:09.734 "write_zeroes": true, 00:07:09.734 "zcopy": false, 00:07:09.734 "get_zone_info": false, 00:07:09.734 "zone_management": false, 00:07:09.734 "zone_append": false, 00:07:09.734 "compare": false, 00:07:09.734 "compare_and_write": false, 00:07:09.734 "abort": false, 00:07:09.734 "seek_hole": false, 00:07:09.734 "seek_data": false, 00:07:09.734 "copy": false, 00:07:09.734 "nvme_iov_md": false 00:07:09.734 }, 00:07:09.734 "memory_domains": [ 00:07:09.734 { 00:07:09.734 
"dma_device_id": "system", 00:07:09.734 "dma_device_type": 1 00:07:09.734 }, 00:07:09.734 { 00:07:09.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.734 "dma_device_type": 2 00:07:09.734 }, 00:07:09.734 { 00:07:09.734 "dma_device_id": "system", 00:07:09.734 "dma_device_type": 1 00:07:09.734 }, 00:07:09.734 { 00:07:09.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.734 "dma_device_type": 2 00:07:09.734 } 00:07:09.734 ], 00:07:09.734 "driver_specific": { 00:07:09.734 "raid": { 00:07:09.734 "uuid": "b09d9d0d-8354-4113-b560-ffe0aa7a1943", 00:07:09.734 "strip_size_kb": 64, 00:07:09.734 "state": "online", 00:07:09.734 "raid_level": "concat", 00:07:09.734 "superblock": true, 00:07:09.734 "num_base_bdevs": 2, 00:07:09.734 "num_base_bdevs_discovered": 2, 00:07:09.734 "num_base_bdevs_operational": 2, 00:07:09.734 "base_bdevs_list": [ 00:07:09.734 { 00:07:09.734 "name": "BaseBdev1", 00:07:09.734 "uuid": "8551994d-18d1-45ff-92f9-0dc5a5f7185b", 00:07:09.734 "is_configured": true, 00:07:09.734 "data_offset": 2048, 00:07:09.734 "data_size": 63488 00:07:09.734 }, 00:07:09.734 { 00:07:09.734 "name": "BaseBdev2", 00:07:09.734 "uuid": "3e7d9da3-313f-41d8-a948-9302fe5393ea", 00:07:09.734 "is_configured": true, 00:07:09.734 "data_offset": 2048, 00:07:09.734 "data_size": 63488 00:07:09.734 } 00:07:09.734 ] 00:07:09.734 } 00:07:09.734 } 00:07:09.734 }' 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:09.734 BaseBdev2' 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:09.734 16:59:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.734 16:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.734 [2024-11-20 16:59:33.593410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:09.734 [2024-11-20 16:59:33.593452] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:09.734 [2024-11-20 16:59:33.593510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.994 "name": "Existed_Raid", 00:07:09.994 "uuid": "b09d9d0d-8354-4113-b560-ffe0aa7a1943", 00:07:09.994 "strip_size_kb": 64, 00:07:09.994 "state": "offline", 00:07:09.994 "raid_level": "concat", 00:07:09.994 "superblock": true, 00:07:09.994 "num_base_bdevs": 2, 00:07:09.994 "num_base_bdevs_discovered": 1, 00:07:09.994 "num_base_bdevs_operational": 1, 00:07:09.994 "base_bdevs_list": [ 00:07:09.994 { 00:07:09.994 "name": null, 00:07:09.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.994 "is_configured": false, 00:07:09.994 "data_offset": 0, 00:07:09.994 "data_size": 63488 00:07:09.994 }, 00:07:09.994 { 00:07:09.994 "name": "BaseBdev2", 00:07:09.994 "uuid": "3e7d9da3-313f-41d8-a948-9302fe5393ea", 00:07:09.994 "is_configured": true, 00:07:09.994 "data_offset": 2048, 00:07:09.994 "data_size": 63488 00:07:09.994 } 00:07:09.994 ] 
00:07:09.994 }' 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.994 16:59:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.563 [2024-11-20 16:59:34.262493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:10.563 [2024-11-20 16:59:34.262569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.563 16:59:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61731 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61731 ']' 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61731 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61731 00:07:10.563 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.822 killing process with pid 61731 00:07:10.822 16:59:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.822 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61731' 00:07:10.822 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61731 00:07:10.822 [2024-11-20 16:59:34.430028] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:10.822 16:59:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61731 00:07:10.822 [2024-11-20 16:59:34.444131] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.761 16:59:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:11.761 00:07:11.761 real 0m5.322s 00:07:11.761 user 0m8.104s 00:07:11.761 sys 0m0.743s 00:07:11.761 16:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.761 16:59:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.761 ************************************ 00:07:11.761 END TEST raid_state_function_test_sb 00:07:11.761 ************************************ 00:07:11.761 16:59:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:11.761 16:59:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:11.761 16:59:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.761 16:59:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.761 ************************************ 00:07:11.761 START TEST raid_superblock_test 00:07:11.761 ************************************ 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61983 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61983 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61983 ']' 00:07:11.761 
16:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.761 16:59:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.761 [2024-11-20 16:59:35.538185] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:07:11.761 [2024-11-20 16:59:35.538347] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61983 ] 00:07:12.021 [2024-11-20 16:59:35.706082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.021 [2024-11-20 16:59:35.829325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.280 [2024-11-20 16:59:36.026990] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.280 [2024-11-20 16:59:36.027221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.849 malloc1 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.849 [2024-11-20 16:59:36.596142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:12.849 [2024-11-20 16:59:36.596379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.849 [2024-11-20 16:59:36.596477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:12.849 [2024-11-20 16:59:36.596674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:12.849 [2024-11-20 16:59:36.599552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.849 [2024-11-20 16:59:36.599820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:12.849 pt1 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.849 malloc2 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.849 [2024-11-20 16:59:36.650674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:12.849 [2024-11-20 16:59:36.650754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.849 [2024-11-20 16:59:36.650817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:12.849 [2024-11-20 16:59:36.650835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.849 [2024-11-20 16:59:36.653589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.849 [2024-11-20 16:59:36.653630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:12.849 pt2 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.849 [2024-11-20 16:59:36.662736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:12.849 [2024-11-20 16:59:36.665328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:12.849 [2024-11-20 16:59:36.665515] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:12.849 [2024-11-20 16:59:36.665533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:12.849 [2024-11-20 16:59:36.665863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:12.849 [2024-11-20 16:59:36.666093] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:12.849 [2024-11-20 16:59:36.666121] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:12.849 [2024-11-20 16:59:36.666306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.849 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:12.850 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.850 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.850 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.850 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.850 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.850 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.850 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.850 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.850 16:59:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.850 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.850 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.109 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.109 "name": "raid_bdev1", 00:07:13.109 "uuid": "2ee6f765-e117-427f-9ecf-bf7619024e20", 00:07:13.109 "strip_size_kb": 64, 00:07:13.109 "state": "online", 00:07:13.109 "raid_level": "concat", 00:07:13.109 "superblock": true, 00:07:13.109 "num_base_bdevs": 2, 00:07:13.109 "num_base_bdevs_discovered": 2, 00:07:13.109 "num_base_bdevs_operational": 2, 00:07:13.109 "base_bdevs_list": [ 00:07:13.109 { 00:07:13.109 "name": "pt1", 00:07:13.109 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:13.109 "is_configured": true, 00:07:13.109 "data_offset": 2048, 00:07:13.109 "data_size": 63488 00:07:13.109 }, 00:07:13.109 { 00:07:13.109 "name": "pt2", 00:07:13.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:13.109 "is_configured": true, 00:07:13.109 "data_offset": 2048, 00:07:13.109 "data_size": 63488 00:07:13.109 } 00:07:13.109 ] 00:07:13.109 }' 00:07:13.109 16:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.109 16:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.368 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:13.368 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:13.368 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:13.368 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:13.368 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:13.368 
16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:13.368 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:13.368 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:13.368 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.368 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.368 [2024-11-20 16:59:37.195245] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.368 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.627 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:13.627 "name": "raid_bdev1", 00:07:13.627 "aliases": [ 00:07:13.627 "2ee6f765-e117-427f-9ecf-bf7619024e20" 00:07:13.627 ], 00:07:13.627 "product_name": "Raid Volume", 00:07:13.627 "block_size": 512, 00:07:13.627 "num_blocks": 126976, 00:07:13.627 "uuid": "2ee6f765-e117-427f-9ecf-bf7619024e20", 00:07:13.627 "assigned_rate_limits": { 00:07:13.627 "rw_ios_per_sec": 0, 00:07:13.627 "rw_mbytes_per_sec": 0, 00:07:13.627 "r_mbytes_per_sec": 0, 00:07:13.627 "w_mbytes_per_sec": 0 00:07:13.627 }, 00:07:13.627 "claimed": false, 00:07:13.627 "zoned": false, 00:07:13.627 "supported_io_types": { 00:07:13.627 "read": true, 00:07:13.627 "write": true, 00:07:13.627 "unmap": true, 00:07:13.627 "flush": true, 00:07:13.627 "reset": true, 00:07:13.627 "nvme_admin": false, 00:07:13.627 "nvme_io": false, 00:07:13.627 "nvme_io_md": false, 00:07:13.627 "write_zeroes": true, 00:07:13.627 "zcopy": false, 00:07:13.627 "get_zone_info": false, 00:07:13.627 "zone_management": false, 00:07:13.627 "zone_append": false, 00:07:13.627 "compare": false, 00:07:13.627 "compare_and_write": false, 00:07:13.627 "abort": false, 00:07:13.627 "seek_hole": false, 00:07:13.627 
"seek_data": false, 00:07:13.627 "copy": false, 00:07:13.627 "nvme_iov_md": false 00:07:13.627 }, 00:07:13.627 "memory_domains": [ 00:07:13.627 { 00:07:13.627 "dma_device_id": "system", 00:07:13.627 "dma_device_type": 1 00:07:13.627 }, 00:07:13.627 { 00:07:13.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.627 "dma_device_type": 2 00:07:13.627 }, 00:07:13.627 { 00:07:13.627 "dma_device_id": "system", 00:07:13.627 "dma_device_type": 1 00:07:13.627 }, 00:07:13.627 { 00:07:13.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.627 "dma_device_type": 2 00:07:13.627 } 00:07:13.627 ], 00:07:13.627 "driver_specific": { 00:07:13.627 "raid": { 00:07:13.627 "uuid": "2ee6f765-e117-427f-9ecf-bf7619024e20", 00:07:13.627 "strip_size_kb": 64, 00:07:13.627 "state": "online", 00:07:13.627 "raid_level": "concat", 00:07:13.627 "superblock": true, 00:07:13.627 "num_base_bdevs": 2, 00:07:13.627 "num_base_bdevs_discovered": 2, 00:07:13.627 "num_base_bdevs_operational": 2, 00:07:13.627 "base_bdevs_list": [ 00:07:13.627 { 00:07:13.627 "name": "pt1", 00:07:13.627 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:13.627 "is_configured": true, 00:07:13.627 "data_offset": 2048, 00:07:13.627 "data_size": 63488 00:07:13.627 }, 00:07:13.627 { 00:07:13.627 "name": "pt2", 00:07:13.627 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:13.627 "is_configured": true, 00:07:13.627 "data_offset": 2048, 00:07:13.627 "data_size": 63488 00:07:13.627 } 00:07:13.627 ] 00:07:13.627 } 00:07:13.627 } 00:07:13.627 }' 00:07:13.627 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:13.627 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:13.627 pt2' 00:07:13.627 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.628 16:59:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.628 [2024-11-20 16:59:37.459306] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.628 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2ee6f765-e117-427f-9ecf-bf7619024e20 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2ee6f765-e117-427f-9ecf-bf7619024e20 ']' 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.888 [2024-11-20 16:59:37.506924] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:13.888 [2024-11-20 16:59:37.506960] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:13.888 [2024-11-20 16:59:37.507050] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:13.888 [2024-11-20 16:59:37.507112] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:13.888 [2024-11-20 16:59:37.507156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.888 [2024-11-20 16:59:37.647004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:13.888 [2024-11-20 16:59:37.649605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:13.888 [2024-11-20 16:59:37.649715] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:13.888 [2024-11-20 16:59:37.649830] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:13.888 [2024-11-20 16:59:37.649858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:13.888 [2024-11-20 16:59:37.649874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:13.888 request: 00:07:13.888 { 00:07:13.888 "name": "raid_bdev1", 00:07:13.888 "raid_level": "concat", 00:07:13.888 "base_bdevs": [ 00:07:13.888 "malloc1", 00:07:13.888 "malloc2" 00:07:13.888 ], 00:07:13.888 "strip_size_kb": 64, 00:07:13.888 "superblock": false, 00:07:13.888 "method": "bdev_raid_create", 00:07:13.888 "req_id": 1 00:07:13.888 } 00:07:13.888 Got JSON-RPC error response 00:07:13.888 response: 00:07:13.888 { 00:07:13.888 "code": -17, 00:07:13.888 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:13.888 } 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.888 16:59:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.888 [2024-11-20 16:59:37.711046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:13.888 [2024-11-20 16:59:37.711141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:13.888 [2024-11-20 16:59:37.711188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:13.888 [2024-11-20 16:59:37.711204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:13.888 [2024-11-20 16:59:37.714181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:13.888 [2024-11-20 16:59:37.714397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:13.888 [2024-11-20 16:59:37.714499] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:13.888 [2024-11-20 16:59:37.714583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:13.888 pt1 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.888 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:13.888 16:59:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:13.889 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:13.889 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:13.889 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.889 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:13.889 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.889 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.889 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.889 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.889 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.889 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.889 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.889 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:13.889 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.148 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.148 "name": "raid_bdev1", 00:07:14.148 "uuid": "2ee6f765-e117-427f-9ecf-bf7619024e20", 00:07:14.148 "strip_size_kb": 64, 00:07:14.148 "state": "configuring", 00:07:14.148 "raid_level": "concat", 00:07:14.148 "superblock": true, 00:07:14.148 "num_base_bdevs": 2, 00:07:14.148 "num_base_bdevs_discovered": 1, 00:07:14.148 "num_base_bdevs_operational": 2, 00:07:14.148 "base_bdevs_list": [ 
00:07:14.148 { 00:07:14.148 "name": "pt1", 00:07:14.148 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:14.148 "is_configured": true, 00:07:14.148 "data_offset": 2048, 00:07:14.148 "data_size": 63488 00:07:14.148 }, 00:07:14.148 { 00:07:14.148 "name": null, 00:07:14.148 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:14.148 "is_configured": false, 00:07:14.148 "data_offset": 2048, 00:07:14.148 "data_size": 63488 00:07:14.148 } 00:07:14.148 ] 00:07:14.148 }' 00:07:14.148 16:59:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.148 16:59:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.407 [2024-11-20 16:59:38.243284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:14.407 [2024-11-20 16:59:38.243495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.407 [2024-11-20 16:59:38.243534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:14.407 [2024-11-20 16:59:38.243553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.407 [2024-11-20 16:59:38.244203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.407 [2024-11-20 16:59:38.244272] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:14.407 [2024-11-20 16:59:38.244367] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:14.407 [2024-11-20 16:59:38.244406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:14.407 [2024-11-20 16:59:38.244536] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:14.407 [2024-11-20 16:59:38.244572] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:14.407 [2024-11-20 16:59:38.244936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:14.407 [2024-11-20 16:59:38.245118] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:14.407 [2024-11-20 16:59:38.245134] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:14.407 [2024-11-20 16:59:38.245343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.407 pt2 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.407 16:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.667 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.667 "name": "raid_bdev1", 00:07:14.667 "uuid": "2ee6f765-e117-427f-9ecf-bf7619024e20", 00:07:14.667 "strip_size_kb": 64, 00:07:14.667 "state": "online", 00:07:14.667 "raid_level": "concat", 00:07:14.667 "superblock": true, 00:07:14.667 "num_base_bdevs": 2, 00:07:14.667 "num_base_bdevs_discovered": 2, 00:07:14.667 "num_base_bdevs_operational": 2, 00:07:14.667 "base_bdevs_list": [ 00:07:14.667 { 00:07:14.667 "name": "pt1", 00:07:14.667 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:14.667 "is_configured": true, 00:07:14.667 "data_offset": 2048, 00:07:14.667 "data_size": 63488 00:07:14.667 }, 00:07:14.667 { 00:07:14.667 "name": "pt2", 00:07:14.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:14.667 "is_configured": true, 00:07:14.667 "data_offset": 2048, 00:07:14.667 "data_size": 
63488 00:07:14.667 } 00:07:14.667 ] 00:07:14.667 }' 00:07:14.667 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.667 16:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.927 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:14.927 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:14.927 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:14.927 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:14.927 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:14.927 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:14.927 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:14.927 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:14.927 16:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.927 16:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.927 [2024-11-20 16:59:38.787737] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.186 16:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.186 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:15.186 "name": "raid_bdev1", 00:07:15.186 "aliases": [ 00:07:15.186 "2ee6f765-e117-427f-9ecf-bf7619024e20" 00:07:15.186 ], 00:07:15.186 "product_name": "Raid Volume", 00:07:15.186 "block_size": 512, 00:07:15.186 "num_blocks": 126976, 00:07:15.186 "uuid": "2ee6f765-e117-427f-9ecf-bf7619024e20", 00:07:15.186 "assigned_rate_limits": { 00:07:15.186 
"rw_ios_per_sec": 0, 00:07:15.186 "rw_mbytes_per_sec": 0, 00:07:15.186 "r_mbytes_per_sec": 0, 00:07:15.186 "w_mbytes_per_sec": 0 00:07:15.186 }, 00:07:15.186 "claimed": false, 00:07:15.186 "zoned": false, 00:07:15.186 "supported_io_types": { 00:07:15.186 "read": true, 00:07:15.186 "write": true, 00:07:15.186 "unmap": true, 00:07:15.186 "flush": true, 00:07:15.186 "reset": true, 00:07:15.186 "nvme_admin": false, 00:07:15.186 "nvme_io": false, 00:07:15.186 "nvme_io_md": false, 00:07:15.186 "write_zeroes": true, 00:07:15.186 "zcopy": false, 00:07:15.186 "get_zone_info": false, 00:07:15.186 "zone_management": false, 00:07:15.186 "zone_append": false, 00:07:15.186 "compare": false, 00:07:15.186 "compare_and_write": false, 00:07:15.187 "abort": false, 00:07:15.187 "seek_hole": false, 00:07:15.187 "seek_data": false, 00:07:15.187 "copy": false, 00:07:15.187 "nvme_iov_md": false 00:07:15.187 }, 00:07:15.187 "memory_domains": [ 00:07:15.187 { 00:07:15.187 "dma_device_id": "system", 00:07:15.187 "dma_device_type": 1 00:07:15.187 }, 00:07:15.187 { 00:07:15.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.187 "dma_device_type": 2 00:07:15.187 }, 00:07:15.187 { 00:07:15.187 "dma_device_id": "system", 00:07:15.187 "dma_device_type": 1 00:07:15.187 }, 00:07:15.187 { 00:07:15.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.187 "dma_device_type": 2 00:07:15.187 } 00:07:15.187 ], 00:07:15.187 "driver_specific": { 00:07:15.187 "raid": { 00:07:15.187 "uuid": "2ee6f765-e117-427f-9ecf-bf7619024e20", 00:07:15.187 "strip_size_kb": 64, 00:07:15.187 "state": "online", 00:07:15.187 "raid_level": "concat", 00:07:15.187 "superblock": true, 00:07:15.187 "num_base_bdevs": 2, 00:07:15.187 "num_base_bdevs_discovered": 2, 00:07:15.187 "num_base_bdevs_operational": 2, 00:07:15.187 "base_bdevs_list": [ 00:07:15.187 { 00:07:15.187 "name": "pt1", 00:07:15.187 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:15.187 "is_configured": true, 00:07:15.187 "data_offset": 2048, 00:07:15.187 
"data_size": 63488 00:07:15.187 }, 00:07:15.187 { 00:07:15.187 "name": "pt2", 00:07:15.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:15.187 "is_configured": true, 00:07:15.187 "data_offset": 2048, 00:07:15.187 "data_size": 63488 00:07:15.187 } 00:07:15.187 ] 00:07:15.187 } 00:07:15.187 } 00:07:15.187 }' 00:07:15.187 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:15.187 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:15.187 pt2' 00:07:15.187 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.187 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:15.187 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.187 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:15.187 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.187 16:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.187 16:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.187 16:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.187 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.187 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.187 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.187 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:07:15.187 16:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.187 16:59:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.187 16:59:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.187 16:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.187 16:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.187 16:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.187 16:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:15.187 16:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:15.187 16:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.187 16:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.447 [2024-11-20 16:59:39.055843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.447 16:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.447 16:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2ee6f765-e117-427f-9ecf-bf7619024e20 '!=' 2ee6f765-e117-427f-9ecf-bf7619024e20 ']' 00:07:15.447 16:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:15.447 16:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:15.447 16:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:15.447 16:59:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61983 00:07:15.447 16:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61983 ']' 
00:07:15.447 16:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61983 00:07:15.447 16:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:15.447 16:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.447 16:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61983 00:07:15.447 killing process with pid 61983 00:07:15.447 16:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.447 16:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.448 16:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61983' 00:07:15.448 16:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61983 00:07:15.448 [2024-11-20 16:59:39.141069] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:15.448 16:59:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61983 00:07:15.448 [2024-11-20 16:59:39.141214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.448 [2024-11-20 16:59:39.141273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:15.448 [2024-11-20 16:59:39.141293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:15.448 [2024-11-20 16:59:39.312270] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:16.396 ************************************ 00:07:16.396 END TEST raid_superblock_test 00:07:16.396 ************************************ 00:07:16.396 16:59:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:16.396 00:07:16.396 real 0m4.781s 00:07:16.396 user 0m7.182s 00:07:16.396 sys 
0m0.670s 00:07:16.396 16:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.396 16:59:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.656 16:59:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:16.656 16:59:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:16.656 16:59:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.656 16:59:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:16.656 ************************************ 00:07:16.656 START TEST raid_read_error_test 00:07:16.656 ************************************ 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:16.656 
16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OHprJ5gUm5 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62200 00:07:16.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62200 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62200 ']' 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.656 16:59:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.656 [2024-11-20 16:59:40.396483] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:07:16.656 [2024-11-20 16:59:40.397133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62200 ] 00:07:16.916 [2024-11-20 16:59:40.579273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.916 [2024-11-20 16:59:40.689745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.176 [2024-11-20 16:59:40.868843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.176 [2024-11-20 16:59:40.869113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.745 BaseBdev1_malloc 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.745 true 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.745 [2024-11-20 16:59:41.435032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:17.745 [2024-11-20 16:59:41.435099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.745 [2024-11-20 16:59:41.435128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:17.745 [2024-11-20 16:59:41.435145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.745 [2024-11-20 16:59:41.438129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.745 [2024-11-20 16:59:41.438235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:17.745 BaseBdev1 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.745 BaseBdev2_malloc 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.745 true 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.745 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.745 [2024-11-20 16:59:41.489338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:17.745 [2024-11-20 16:59:41.489424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.745 [2024-11-20 16:59:41.489462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:17.745 [2024-11-20 16:59:41.489477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.746 [2024-11-20 16:59:41.492184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.746 [2024-11-20 16:59:41.492393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:17.746 BaseBdev2 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.746 [2024-11-20 16:59:41.497405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:07:17.746 [2024-11-20 16:59:41.499812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:17.746 [2024-11-20 16:59:41.500064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:17.746 [2024-11-20 16:59:41.500100] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:17.746 [2024-11-20 16:59:41.500365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:17.746 [2024-11-20 16:59:41.500552] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:17.746 [2024-11-20 16:59:41.500571] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:17.746 [2024-11-20 16:59:41.500730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.746 "name": "raid_bdev1", 00:07:17.746 "uuid": "5a3a6163-07c2-4b36-bbd1-8a3e6bb5b5ff", 00:07:17.746 "strip_size_kb": 64, 00:07:17.746 "state": "online", 00:07:17.746 "raid_level": "concat", 00:07:17.746 "superblock": true, 00:07:17.746 "num_base_bdevs": 2, 00:07:17.746 "num_base_bdevs_discovered": 2, 00:07:17.746 "num_base_bdevs_operational": 2, 00:07:17.746 "base_bdevs_list": [ 00:07:17.746 { 00:07:17.746 "name": "BaseBdev1", 00:07:17.746 "uuid": "f95c222f-87fd-5646-b282-984df1d8f030", 00:07:17.746 "is_configured": true, 00:07:17.746 "data_offset": 2048, 00:07:17.746 "data_size": 63488 00:07:17.746 }, 00:07:17.746 { 00:07:17.746 "name": "BaseBdev2", 00:07:17.746 "uuid": "676e311f-cceb-5dca-ae71-f45486d149ac", 00:07:17.746 "is_configured": true, 00:07:17.746 "data_offset": 2048, 00:07:17.746 "data_size": 63488 00:07:17.746 } 00:07:17.746 ] 00:07:17.746 }' 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.746 16:59:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.314 16:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:18.314 16:59:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:18.314 [2024-11-20 16:59:42.115017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:19.252 16:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:19.252 16:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.252 16:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.252 "name": "raid_bdev1", 00:07:19.252 "uuid": "5a3a6163-07c2-4b36-bbd1-8a3e6bb5b5ff", 00:07:19.252 "strip_size_kb": 64, 00:07:19.252 "state": "online", 00:07:19.252 "raid_level": "concat", 00:07:19.252 "superblock": true, 00:07:19.252 "num_base_bdevs": 2, 00:07:19.252 "num_base_bdevs_discovered": 2, 00:07:19.252 "num_base_bdevs_operational": 2, 00:07:19.252 "base_bdevs_list": [ 00:07:19.252 { 00:07:19.252 "name": "BaseBdev1", 00:07:19.252 "uuid": "f95c222f-87fd-5646-b282-984df1d8f030", 00:07:19.252 "is_configured": true, 00:07:19.252 "data_offset": 2048, 00:07:19.252 "data_size": 63488 00:07:19.252 }, 00:07:19.252 { 00:07:19.252 "name": "BaseBdev2", 00:07:19.252 "uuid": "676e311f-cceb-5dca-ae71-f45486d149ac", 00:07:19.252 "is_configured": true, 00:07:19.252 "data_offset": 2048, 00:07:19.252 "data_size": 63488 00:07:19.252 } 00:07:19.252 ] 00:07:19.252 }' 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.252 16:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.821 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:19.821 16:59:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.821 16:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.821 [2024-11-20 16:59:43.532827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:19.821 [2024-11-20 16:59:43.533025] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.821 [2024-11-20 16:59:43.536800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.821 [2024-11-20 16:59:43.537101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.821 { 00:07:19.821 "results": [ 00:07:19.821 { 00:07:19.821 "job": "raid_bdev1", 00:07:19.821 "core_mask": "0x1", 00:07:19.821 "workload": "randrw", 00:07:19.821 "percentage": 50, 00:07:19.821 "status": "finished", 00:07:19.821 "queue_depth": 1, 00:07:19.821 "io_size": 131072, 00:07:19.821 "runtime": 1.415631, 00:07:19.821 "iops": 12315.356190984798, 00:07:19.821 "mibps": 1539.4195238730997, 00:07:19.821 "io_failed": 1, 00:07:19.821 "io_timeout": 0, 00:07:19.821 "avg_latency_us": 112.85892515055922, 00:07:19.821 "min_latency_us": 33.74545454545454, 00:07:19.821 "max_latency_us": 1839.4763636363637 00:07:19.821 } 00:07:19.821 ], 00:07:19.821 "core_count": 1 00:07:19.821 } 00:07:19.821 [2024-11-20 16:59:43.537311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.821 [2024-11-20 16:59:43.537342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:19.821 16:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.821 16:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62200 00:07:19.821 16:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62200 ']' 00:07:19.821 16:59:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62200 00:07:19.821 16:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:19.821 16:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.821 16:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62200 00:07:19.821 killing process with pid 62200 00:07:19.821 16:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.821 16:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.821 16:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62200' 00:07:19.821 16:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62200 00:07:19.821 16:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62200 00:07:19.821 [2024-11-20 16:59:43.579514] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:20.080 [2024-11-20 16:59:43.693604] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.019 16:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OHprJ5gUm5 00:07:21.019 16:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:21.019 16:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:21.019 16:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:21.019 16:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:21.019 16:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:21.019 16:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:21.019 ************************************ 
00:07:21.019 END TEST raid_read_error_test 00:07:21.019 ************************************ 00:07:21.019 16:59:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:21.019 00:07:21.019 real 0m4.397s 00:07:21.019 user 0m5.554s 00:07:21.019 sys 0m0.534s 00:07:21.019 16:59:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.019 16:59:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.019 16:59:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:21.019 16:59:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:21.019 16:59:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.019 16:59:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.019 ************************************ 00:07:21.019 START TEST raid_write_error_test 00:07:21.019 ************************************ 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:21.019 
16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:21.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.C2ZuWiVVJs 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62340 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62340 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62340 ']' 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.019 16:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.019 [2024-11-20 16:59:44.821861] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:07:21.020 [2024-11-20 16:59:44.822021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62340 ] 00:07:21.278 [2024-11-20 16:59:44.989307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.279 [2024-11-20 16:59:45.104328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.537 [2024-11-20 16:59:45.282309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.537 [2024-11-20 16:59:45.282390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.106 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.106 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:22.106 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:22.106 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:22.106 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.106 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.106 BaseBdev1_malloc 00:07:22.106 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.106 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:22.106 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.106 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.106 true 00:07:22.106 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:22.106 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:22.106 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.106 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.106 [2024-11-20 16:59:45.873543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:22.106 [2024-11-20 16:59:45.873615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.106 [2024-11-20 16:59:45.873642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:22.107 [2024-11-20 16:59:45.873657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.107 [2024-11-20 16:59:45.876556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.107 [2024-11-20 16:59:45.876789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:22.107 BaseBdev1 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.107 BaseBdev2_malloc 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:22.107 16:59:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.107 true 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.107 [2024-11-20 16:59:45.943399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:22.107 [2024-11-20 16:59:45.943465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.107 [2024-11-20 16:59:45.943490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:22.107 [2024-11-20 16:59:45.943507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.107 [2024-11-20 16:59:45.946470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.107 [2024-11-20 16:59:45.946527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:22.107 BaseBdev2 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.107 [2024-11-20 16:59:45.951469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:22.107 [2024-11-20 16:59:45.954158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:22.107 [2024-11-20 16:59:45.954559] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:22.107 [2024-11-20 16:59:45.954708] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:22.107 [2024-11-20 16:59:45.955109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:22.107 [2024-11-20 16:59:45.955530] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:22.107 [2024-11-20 16:59:45.955688] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:22.107 [2024-11-20 16:59:45.956127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.107 16:59:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.107 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.366 16:59:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.366 16:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.366 "name": "raid_bdev1", 00:07:22.366 "uuid": "e4a66d44-349b-42ca-8aef-5aae5eb03bd1", 00:07:22.366 "strip_size_kb": 64, 00:07:22.366 "state": "online", 00:07:22.366 "raid_level": "concat", 00:07:22.366 "superblock": true, 00:07:22.366 "num_base_bdevs": 2, 00:07:22.366 "num_base_bdevs_discovered": 2, 00:07:22.366 "num_base_bdevs_operational": 2, 00:07:22.366 "base_bdevs_list": [ 00:07:22.366 { 00:07:22.366 "name": "BaseBdev1", 00:07:22.366 "uuid": "0c23a56d-3e62-5dd2-b2a5-a1708ea2f709", 00:07:22.366 "is_configured": true, 00:07:22.366 "data_offset": 2048, 00:07:22.367 "data_size": 63488 00:07:22.367 }, 00:07:22.367 { 00:07:22.367 "name": "BaseBdev2", 00:07:22.367 "uuid": "af38933a-2a64-52cf-ae16-1315713050b4", 00:07:22.367 "is_configured": true, 00:07:22.367 "data_offset": 2048, 00:07:22.367 "data_size": 63488 00:07:22.367 } 00:07:22.367 ] 00:07:22.367 }' 00:07:22.367 16:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.367 16:59:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.625 16:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:22.625 16:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:22.884 [2024-11-20 16:59:46.609468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:23.820 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:23.820 16:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.820 16:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.821 "name": "raid_bdev1", 00:07:23.821 "uuid": "e4a66d44-349b-42ca-8aef-5aae5eb03bd1", 00:07:23.821 "strip_size_kb": 64, 00:07:23.821 "state": "online", 00:07:23.821 "raid_level": "concat", 00:07:23.821 "superblock": true, 00:07:23.821 "num_base_bdevs": 2, 00:07:23.821 "num_base_bdevs_discovered": 2, 00:07:23.821 "num_base_bdevs_operational": 2, 00:07:23.821 "base_bdevs_list": [ 00:07:23.821 { 00:07:23.821 "name": "BaseBdev1", 00:07:23.821 "uuid": "0c23a56d-3e62-5dd2-b2a5-a1708ea2f709", 00:07:23.821 "is_configured": true, 00:07:23.821 "data_offset": 2048, 00:07:23.821 "data_size": 63488 00:07:23.821 }, 00:07:23.821 { 00:07:23.821 "name": "BaseBdev2", 00:07:23.821 "uuid": "af38933a-2a64-52cf-ae16-1315713050b4", 00:07:23.821 "is_configured": true, 00:07:23.821 "data_offset": 2048, 00:07:23.821 "data_size": 63488 00:07:23.821 } 00:07:23.821 ] 00:07:23.821 }' 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.821 16:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.388 16:59:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:24.388 16:59:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.388 16:59:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.388 [2024-11-20 16:59:48.032315] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:24.388 [2024-11-20 16:59:48.032511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:24.388 [2024-11-20 16:59:48.036072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.388 [2024-11-20 16:59:48.036170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.388 [2024-11-20 16:59:48.036208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:24.388 [2024-11-20 16:59:48.036224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:24.388 { 00:07:24.388 "results": [ 00:07:24.388 { 00:07:24.388 "job": "raid_bdev1", 00:07:24.388 "core_mask": "0x1", 00:07:24.388 "workload": "randrw", 00:07:24.388 "percentage": 50, 00:07:24.388 "status": "finished", 00:07:24.388 "queue_depth": 1, 00:07:24.388 "io_size": 131072, 00:07:24.388 "runtime": 1.420817, 00:07:24.388 "iops": 11385.702733005024, 00:07:24.388 "mibps": 1423.212841625628, 00:07:24.388 "io_failed": 1, 00:07:24.388 "io_timeout": 0, 00:07:24.388 "avg_latency_us": 121.9363598152373, 00:07:24.388 "min_latency_us": 34.67636363636364, 00:07:24.388 "max_latency_us": 1690.530909090909 00:07:24.388 } 00:07:24.388 ], 00:07:24.388 "core_count": 1 00:07:24.388 } 00:07:24.388 16:59:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.388 16:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62340 00:07:24.388 16:59:48 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@954 -- # '[' -z 62340 ']' 00:07:24.388 16:59:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62340 00:07:24.388 16:59:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:24.388 16:59:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.388 16:59:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62340 00:07:24.388 killing process with pid 62340 00:07:24.388 16:59:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.388 16:59:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.388 16:59:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62340' 00:07:24.388 16:59:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62340 00:07:24.388 [2024-11-20 16:59:48.076207] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:24.388 16:59:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62340 00:07:24.388 [2024-11-20 16:59:48.196744] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.818 16:59:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.C2ZuWiVVJs 00:07:25.818 16:59:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:25.818 16:59:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:25.818 16:59:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:07:25.818 16:59:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:25.818 16:59:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:25.818 16:59:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:25.818 16:59:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:07:25.818 00:07:25.818 real 0m4.557s 00:07:25.818 user 0m5.765s 00:07:25.818 sys 0m0.526s 00:07:25.818 16:59:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.818 ************************************ 00:07:25.818 END TEST raid_write_error_test 00:07:25.818 ************************************ 00:07:25.818 16:59:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.818 16:59:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:25.818 16:59:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:25.818 16:59:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:25.818 16:59:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.818 16:59:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.818 ************************************ 00:07:25.818 START TEST raid_state_function_test 00:07:25.818 ************************************ 00:07:25.818 16:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:25.818 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:25.818 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:25.818 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:25.818 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:25.818 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:25.818 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:25.818 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:25.818 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:25.818 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:25.818 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:25.818 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:25.818 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:25.818 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:25.819 Process raid pid: 62478 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62478 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62478' 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62478 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62478 ']' 00:07:25.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.819 16:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.819 [2024-11-20 16:59:49.444372] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:07:25.819 [2024-11-20 16:59:49.444545] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.819 [2024-11-20 16:59:49.627964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.077 [2024-11-20 16:59:49.753225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.077 [2024-11-20 16:59:49.940672] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.077 [2024-11-20 16:59:49.940727] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.645 [2024-11-20 16:59:50.431580] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:26.645 [2024-11-20 16:59:50.431854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:26.645 [2024-11-20 16:59:50.431896] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.645 [2024-11-20 16:59:50.431917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.645 16:59:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.645 "name": "Existed_Raid", 00:07:26.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.645 "strip_size_kb": 0, 00:07:26.645 "state": "configuring", 00:07:26.645 
"raid_level": "raid1", 00:07:26.645 "superblock": false, 00:07:26.645 "num_base_bdevs": 2, 00:07:26.645 "num_base_bdevs_discovered": 0, 00:07:26.645 "num_base_bdevs_operational": 2, 00:07:26.645 "base_bdevs_list": [ 00:07:26.645 { 00:07:26.645 "name": "BaseBdev1", 00:07:26.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.645 "is_configured": false, 00:07:26.645 "data_offset": 0, 00:07:26.645 "data_size": 0 00:07:26.645 }, 00:07:26.645 { 00:07:26.645 "name": "BaseBdev2", 00:07:26.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.645 "is_configured": false, 00:07:26.645 "data_offset": 0, 00:07:26.645 "data_size": 0 00:07:26.645 } 00:07:26.645 ] 00:07:26.645 }' 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.645 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.213 [2024-11-20 16:59:50.943694] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:27.213 [2024-11-20 16:59:50.943955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:27.213 [2024-11-20 16:59:50.951655] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:27.213 [2024-11-20 16:59:50.951720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:27.213 [2024-11-20 16:59:50.951735] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.213 [2024-11-20 16:59:50.951768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.213 [2024-11-20 16:59:50.993800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:27.213 BaseBdev1 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.213 16:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.213 [ 00:07:27.213 { 00:07:27.213 "name": "BaseBdev1", 00:07:27.213 "aliases": [ 00:07:27.213 "d1be36c7-1df4-43a5-ba25-48a12a0f1ecc" 00:07:27.213 ], 00:07:27.213 "product_name": "Malloc disk", 00:07:27.213 "block_size": 512, 00:07:27.213 "num_blocks": 65536, 00:07:27.213 "uuid": "d1be36c7-1df4-43a5-ba25-48a12a0f1ecc", 00:07:27.213 "assigned_rate_limits": { 00:07:27.213 "rw_ios_per_sec": 0, 00:07:27.213 "rw_mbytes_per_sec": 0, 00:07:27.213 "r_mbytes_per_sec": 0, 00:07:27.213 "w_mbytes_per_sec": 0 00:07:27.213 }, 00:07:27.213 "claimed": true, 00:07:27.213 "claim_type": "exclusive_write", 00:07:27.213 "zoned": false, 00:07:27.213 "supported_io_types": { 00:07:27.213 "read": true, 00:07:27.213 "write": true, 00:07:27.213 "unmap": true, 00:07:27.213 "flush": true, 00:07:27.213 "reset": true, 00:07:27.213 "nvme_admin": false, 00:07:27.213 "nvme_io": false, 00:07:27.213 "nvme_io_md": false, 00:07:27.213 "write_zeroes": true, 00:07:27.213 "zcopy": true, 00:07:27.213 "get_zone_info": false, 00:07:27.213 "zone_management": false, 00:07:27.213 "zone_append": false, 00:07:27.213 "compare": false, 00:07:27.213 "compare_and_write": false, 00:07:27.213 "abort": true, 00:07:27.213 "seek_hole": false, 00:07:27.213 "seek_data": false, 00:07:27.213 "copy": true, 00:07:27.213 "nvme_iov_md": 
false 00:07:27.213 }, 00:07:27.213 "memory_domains": [ 00:07:27.213 { 00:07:27.213 "dma_device_id": "system", 00:07:27.213 "dma_device_type": 1 00:07:27.213 }, 00:07:27.213 { 00:07:27.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.213 "dma_device_type": 2 00:07:27.213 } 00:07:27.213 ], 00:07:27.213 "driver_specific": {} 00:07:27.213 } 00:07:27.213 ] 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.213 16:59:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.213 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.473 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.473 "name": "Existed_Raid", 00:07:27.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.473 "strip_size_kb": 0, 00:07:27.473 "state": "configuring", 00:07:27.473 "raid_level": "raid1", 00:07:27.473 "superblock": false, 00:07:27.473 "num_base_bdevs": 2, 00:07:27.473 "num_base_bdevs_discovered": 1, 00:07:27.473 "num_base_bdevs_operational": 2, 00:07:27.473 "base_bdevs_list": [ 00:07:27.473 { 00:07:27.473 "name": "BaseBdev1", 00:07:27.473 "uuid": "d1be36c7-1df4-43a5-ba25-48a12a0f1ecc", 00:07:27.473 "is_configured": true, 00:07:27.473 "data_offset": 0, 00:07:27.473 "data_size": 65536 00:07:27.473 }, 00:07:27.473 { 00:07:27.473 "name": "BaseBdev2", 00:07:27.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.473 "is_configured": false, 00:07:27.473 "data_offset": 0, 00:07:27.473 "data_size": 0 00:07:27.473 } 00:07:27.473 ] 00:07:27.473 }' 00:07:27.473 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.473 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.733 [2024-11-20 16:59:51.546072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:27.733 [2024-11-20 16:59:51.546133] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.733 [2024-11-20 16:59:51.554076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:27.733 [2024-11-20 16:59:51.556584] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.733 [2024-11-20 16:59:51.556854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.733 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.992 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.992 "name": "Existed_Raid", 00:07:27.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.992 "strip_size_kb": 0, 00:07:27.992 "state": "configuring", 00:07:27.992 "raid_level": "raid1", 00:07:27.992 "superblock": false, 00:07:27.992 "num_base_bdevs": 2, 00:07:27.992 "num_base_bdevs_discovered": 1, 00:07:27.992 "num_base_bdevs_operational": 2, 00:07:27.992 "base_bdevs_list": [ 00:07:27.992 { 00:07:27.992 "name": "BaseBdev1", 00:07:27.992 "uuid": "d1be36c7-1df4-43a5-ba25-48a12a0f1ecc", 00:07:27.992 "is_configured": true, 00:07:27.992 "data_offset": 0, 00:07:27.992 "data_size": 65536 00:07:27.992 }, 00:07:27.992 { 00:07:27.992 "name": "BaseBdev2", 00:07:27.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.992 "is_configured": false, 00:07:27.992 "data_offset": 0, 00:07:27.992 "data_size": 0 00:07:27.992 } 00:07:27.992 
] 00:07:27.992 }' 00:07:27.992 16:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.992 16:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.252 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:28.252 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.252 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.511 [2024-11-20 16:59:52.137561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:28.511 [2024-11-20 16:59:52.137897] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:28.511 [2024-11-20 16:59:52.137951] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:28.511 [2024-11-20 16:59:52.138302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:28.511 [2024-11-20 16:59:52.138521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:28.511 [2024-11-20 16:59:52.138541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:28.511 [2024-11-20 16:59:52.139064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.511 BaseBdev2 00:07:28.511 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.511 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:28.511 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:28.511 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:28.511 16:59:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:28.511 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:28.511 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:28.511 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:28.511 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.511 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.511 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.511 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:28.511 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.511 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.511 [ 00:07:28.511 { 00:07:28.511 "name": "BaseBdev2", 00:07:28.511 "aliases": [ 00:07:28.511 "878ba781-3f1e-41f8-b9c4-8d83ed6ff32a" 00:07:28.511 ], 00:07:28.511 "product_name": "Malloc disk", 00:07:28.511 "block_size": 512, 00:07:28.511 "num_blocks": 65536, 00:07:28.511 "uuid": "878ba781-3f1e-41f8-b9c4-8d83ed6ff32a", 00:07:28.511 "assigned_rate_limits": { 00:07:28.511 "rw_ios_per_sec": 0, 00:07:28.511 "rw_mbytes_per_sec": 0, 00:07:28.511 "r_mbytes_per_sec": 0, 00:07:28.511 "w_mbytes_per_sec": 0 00:07:28.511 }, 00:07:28.511 "claimed": true, 00:07:28.511 "claim_type": "exclusive_write", 00:07:28.511 "zoned": false, 00:07:28.511 "supported_io_types": { 00:07:28.511 "read": true, 00:07:28.511 "write": true, 00:07:28.511 "unmap": true, 00:07:28.511 "flush": true, 00:07:28.511 "reset": true, 00:07:28.512 "nvme_admin": false, 00:07:28.512 "nvme_io": false, 00:07:28.512 "nvme_io_md": 
false, 00:07:28.512 "write_zeroes": true, 00:07:28.512 "zcopy": true, 00:07:28.512 "get_zone_info": false, 00:07:28.512 "zone_management": false, 00:07:28.512 "zone_append": false, 00:07:28.512 "compare": false, 00:07:28.512 "compare_and_write": false, 00:07:28.512 "abort": true, 00:07:28.512 "seek_hole": false, 00:07:28.512 "seek_data": false, 00:07:28.512 "copy": true, 00:07:28.512 "nvme_iov_md": false 00:07:28.512 }, 00:07:28.512 "memory_domains": [ 00:07:28.512 { 00:07:28.512 "dma_device_id": "system", 00:07:28.512 "dma_device_type": 1 00:07:28.512 }, 00:07:28.512 { 00:07:28.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.512 "dma_device_type": 2 00:07:28.512 } 00:07:28.512 ], 00:07:28.512 "driver_specific": {} 00:07:28.512 } 00:07:28.512 ] 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.512 "name": "Existed_Raid", 00:07:28.512 "uuid": "cc2ceed2-fd2c-4dc0-9d85-be5b7cc2d9e7", 00:07:28.512 "strip_size_kb": 0, 00:07:28.512 "state": "online", 00:07:28.512 "raid_level": "raid1", 00:07:28.512 "superblock": false, 00:07:28.512 "num_base_bdevs": 2, 00:07:28.512 "num_base_bdevs_discovered": 2, 00:07:28.512 "num_base_bdevs_operational": 2, 00:07:28.512 "base_bdevs_list": [ 00:07:28.512 { 00:07:28.512 "name": "BaseBdev1", 00:07:28.512 "uuid": "d1be36c7-1df4-43a5-ba25-48a12a0f1ecc", 00:07:28.512 "is_configured": true, 00:07:28.512 "data_offset": 0, 00:07:28.512 "data_size": 65536 00:07:28.512 }, 00:07:28.512 { 00:07:28.512 "name": "BaseBdev2", 00:07:28.512 "uuid": "878ba781-3f1e-41f8-b9c4-8d83ed6ff32a", 00:07:28.512 "is_configured": true, 00:07:28.512 "data_offset": 0, 00:07:28.512 "data_size": 65536 00:07:28.512 } 00:07:28.512 ] 00:07:28.512 }' 00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:28.512 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:29.081 [2024-11-20 16:59:52.686246] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:29.081 "name": "Existed_Raid", 00:07:29.081 "aliases": [ 00:07:29.081 "cc2ceed2-fd2c-4dc0-9d85-be5b7cc2d9e7" 00:07:29.081 ], 00:07:29.081 "product_name": "Raid Volume", 00:07:29.081 "block_size": 512, 00:07:29.081 "num_blocks": 65536, 00:07:29.081 "uuid": "cc2ceed2-fd2c-4dc0-9d85-be5b7cc2d9e7", 00:07:29.081 "assigned_rate_limits": { 00:07:29.081 "rw_ios_per_sec": 0, 00:07:29.081 "rw_mbytes_per_sec": 0, 00:07:29.081 "r_mbytes_per_sec": 
0, 00:07:29.081 "w_mbytes_per_sec": 0 00:07:29.081 }, 00:07:29.081 "claimed": false, 00:07:29.081 "zoned": false, 00:07:29.081 "supported_io_types": { 00:07:29.081 "read": true, 00:07:29.081 "write": true, 00:07:29.081 "unmap": false, 00:07:29.081 "flush": false, 00:07:29.081 "reset": true, 00:07:29.081 "nvme_admin": false, 00:07:29.081 "nvme_io": false, 00:07:29.081 "nvme_io_md": false, 00:07:29.081 "write_zeroes": true, 00:07:29.081 "zcopy": false, 00:07:29.081 "get_zone_info": false, 00:07:29.081 "zone_management": false, 00:07:29.081 "zone_append": false, 00:07:29.081 "compare": false, 00:07:29.081 "compare_and_write": false, 00:07:29.081 "abort": false, 00:07:29.081 "seek_hole": false, 00:07:29.081 "seek_data": false, 00:07:29.081 "copy": false, 00:07:29.081 "nvme_iov_md": false 00:07:29.081 }, 00:07:29.081 "memory_domains": [ 00:07:29.081 { 00:07:29.081 "dma_device_id": "system", 00:07:29.081 "dma_device_type": 1 00:07:29.081 }, 00:07:29.081 { 00:07:29.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.081 "dma_device_type": 2 00:07:29.081 }, 00:07:29.081 { 00:07:29.081 "dma_device_id": "system", 00:07:29.081 "dma_device_type": 1 00:07:29.081 }, 00:07:29.081 { 00:07:29.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.081 "dma_device_type": 2 00:07:29.081 } 00:07:29.081 ], 00:07:29.081 "driver_specific": { 00:07:29.081 "raid": { 00:07:29.081 "uuid": "cc2ceed2-fd2c-4dc0-9d85-be5b7cc2d9e7", 00:07:29.081 "strip_size_kb": 0, 00:07:29.081 "state": "online", 00:07:29.081 "raid_level": "raid1", 00:07:29.081 "superblock": false, 00:07:29.081 "num_base_bdevs": 2, 00:07:29.081 "num_base_bdevs_discovered": 2, 00:07:29.081 "num_base_bdevs_operational": 2, 00:07:29.081 "base_bdevs_list": [ 00:07:29.081 { 00:07:29.081 "name": "BaseBdev1", 00:07:29.081 "uuid": "d1be36c7-1df4-43a5-ba25-48a12a0f1ecc", 00:07:29.081 "is_configured": true, 00:07:29.081 "data_offset": 0, 00:07:29.081 "data_size": 65536 00:07:29.081 }, 00:07:29.081 { 00:07:29.081 "name": "BaseBdev2", 
00:07:29.081 "uuid": "878ba781-3f1e-41f8-b9c4-8d83ed6ff32a", 00:07:29.081 "is_configured": true, 00:07:29.081 "data_offset": 0, 00:07:29.081 "data_size": 65536 00:07:29.081 } 00:07:29.081 ] 00:07:29.081 } 00:07:29.081 } 00:07:29.081 }' 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:29.081 BaseBdev2' 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.081 16:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.341 [2024-11-20 16:59:52.949968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.341 "name": "Existed_Raid", 00:07:29.341 "uuid": "cc2ceed2-fd2c-4dc0-9d85-be5b7cc2d9e7", 00:07:29.341 "strip_size_kb": 0, 00:07:29.341 "state": "online", 00:07:29.341 "raid_level": "raid1", 00:07:29.341 "superblock": false, 00:07:29.341 "num_base_bdevs": 2, 00:07:29.341 "num_base_bdevs_discovered": 1, 00:07:29.341 "num_base_bdevs_operational": 1, 00:07:29.341 "base_bdevs_list": [ 00:07:29.341 
{ 00:07:29.341 "name": null, 00:07:29.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.341 "is_configured": false, 00:07:29.341 "data_offset": 0, 00:07:29.341 "data_size": 65536 00:07:29.341 }, 00:07:29.341 { 00:07:29.341 "name": "BaseBdev2", 00:07:29.341 "uuid": "878ba781-3f1e-41f8-b9c4-8d83ed6ff32a", 00:07:29.341 "is_configured": true, 00:07:29.341 "data_offset": 0, 00:07:29.341 "data_size": 65536 00:07:29.341 } 00:07:29.341 ] 00:07:29.341 }' 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.341 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:29.909 [2024-11-20 16:59:53.621721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:29.909 [2024-11-20 16:59:53.621878] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:29.909 [2024-11-20 16:59:53.703930] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.909 [2024-11-20 16:59:53.704023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:29.909 [2024-11-20 16:59:53.704045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62478 00:07:29.909 16:59:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62478 ']' 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62478 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.909 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62478 00:07:30.168 killing process with pid 62478 00:07:30.168 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.168 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.168 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62478' 00:07:30.168 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62478 00:07:30.168 [2024-11-20 16:59:53.796490] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.168 16:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62478 00:07:30.168 [2024-11-20 16:59:53.809692] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:31.106 00:07:31.106 real 0m5.435s 00:07:31.106 user 0m8.270s 00:07:31.106 sys 0m0.788s 00:07:31.106 ************************************ 00:07:31.106 END TEST raid_state_function_test 00:07:31.106 ************************************ 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.106 16:59:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:31.106 16:59:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:31.106 16:59:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.106 16:59:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.106 ************************************ 00:07:31.106 START TEST raid_state_function_test_sb 00:07:31.106 ************************************ 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:31.106 Process raid pid: 62737 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62737 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62737' 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62737 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62737 ']' 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.106 16:59:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.106 16:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.106 [2024-11-20 16:59:54.940504] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:07:31.106 [2024-11-20 16:59:54.940678] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.365 [2024-11-20 16:59:55.123715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.659 [2024-11-20 16:59:55.257585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.659 [2024-11-20 16:59:55.461902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.659 [2024-11-20 16:59:55.462143] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.237 [2024-11-20 16:59:55.955697] 
bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:32.237 [2024-11-20 16:59:55.955767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:32.237 [2024-11-20 16:59:55.955786] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.237 [2024-11-20 16:59:55.955803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.237 16:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.237 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.237 "name": "Existed_Raid", 00:07:32.237 "uuid": "69a2ab14-ad59-46a6-a2d6-4a8a3bd6480c", 00:07:32.237 "strip_size_kb": 0, 00:07:32.237 "state": "configuring", 00:07:32.237 "raid_level": "raid1", 00:07:32.237 "superblock": true, 00:07:32.237 "num_base_bdevs": 2, 00:07:32.237 "num_base_bdevs_discovered": 0, 00:07:32.237 "num_base_bdevs_operational": 2, 00:07:32.237 "base_bdevs_list": [ 00:07:32.237 { 00:07:32.237 "name": "BaseBdev1", 00:07:32.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.237 "is_configured": false, 00:07:32.237 "data_offset": 0, 00:07:32.237 "data_size": 0 00:07:32.237 }, 00:07:32.237 { 00:07:32.237 "name": "BaseBdev2", 00:07:32.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.237 "is_configured": false, 00:07:32.237 "data_offset": 0, 00:07:32.237 "data_size": 0 00:07:32.237 } 00:07:32.237 ] 00:07:32.237 }' 00:07:32.237 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.237 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.805 [2024-11-20 16:59:56.487890] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:32.805 [2024-11-20 16:59:56.487932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.805 [2024-11-20 16:59:56.495870] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:32.805 [2024-11-20 16:59:56.495933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:32.805 [2024-11-20 16:59:56.495948] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.805 [2024-11-20 16:59:56.495967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.805 [2024-11-20 16:59:56.540896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.805 BaseBdev1 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:32.805 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.806 [ 00:07:32.806 { 00:07:32.806 "name": "BaseBdev1", 00:07:32.806 "aliases": [ 00:07:32.806 "b7156142-84d7-49e7-b6d4-72aeef262b2b" 00:07:32.806 ], 00:07:32.806 "product_name": "Malloc disk", 00:07:32.806 "block_size": 512, 00:07:32.806 "num_blocks": 65536, 00:07:32.806 "uuid": "b7156142-84d7-49e7-b6d4-72aeef262b2b", 00:07:32.806 "assigned_rate_limits": { 00:07:32.806 "rw_ios_per_sec": 0, 00:07:32.806 "rw_mbytes_per_sec": 0, 00:07:32.806 "r_mbytes_per_sec": 0, 00:07:32.806 "w_mbytes_per_sec": 0 00:07:32.806 }, 00:07:32.806 "claimed": true, 
00:07:32.806 "claim_type": "exclusive_write", 00:07:32.806 "zoned": false, 00:07:32.806 "supported_io_types": { 00:07:32.806 "read": true, 00:07:32.806 "write": true, 00:07:32.806 "unmap": true, 00:07:32.806 "flush": true, 00:07:32.806 "reset": true, 00:07:32.806 "nvme_admin": false, 00:07:32.806 "nvme_io": false, 00:07:32.806 "nvme_io_md": false, 00:07:32.806 "write_zeroes": true, 00:07:32.806 "zcopy": true, 00:07:32.806 "get_zone_info": false, 00:07:32.806 "zone_management": false, 00:07:32.806 "zone_append": false, 00:07:32.806 "compare": false, 00:07:32.806 "compare_and_write": false, 00:07:32.806 "abort": true, 00:07:32.806 "seek_hole": false, 00:07:32.806 "seek_data": false, 00:07:32.806 "copy": true, 00:07:32.806 "nvme_iov_md": false 00:07:32.806 }, 00:07:32.806 "memory_domains": [ 00:07:32.806 { 00:07:32.806 "dma_device_id": "system", 00:07:32.806 "dma_device_type": 1 00:07:32.806 }, 00:07:32.806 { 00:07:32.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.806 "dma_device_type": 2 00:07:32.806 } 00:07:32.806 ], 00:07:32.806 "driver_specific": {} 00:07:32.806 } 00:07:32.806 ] 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.806 "name": "Existed_Raid", 00:07:32.806 "uuid": "32c00c30-e5a8-4b58-a4b2-2368fab28ffb", 00:07:32.806 "strip_size_kb": 0, 00:07:32.806 "state": "configuring", 00:07:32.806 "raid_level": "raid1", 00:07:32.806 "superblock": true, 00:07:32.806 "num_base_bdevs": 2, 00:07:32.806 "num_base_bdevs_discovered": 1, 00:07:32.806 "num_base_bdevs_operational": 2, 00:07:32.806 "base_bdevs_list": [ 00:07:32.806 { 00:07:32.806 "name": "BaseBdev1", 00:07:32.806 "uuid": "b7156142-84d7-49e7-b6d4-72aeef262b2b", 00:07:32.806 "is_configured": true, 00:07:32.806 "data_offset": 2048, 00:07:32.806 "data_size": 63488 00:07:32.806 }, 00:07:32.806 { 00:07:32.806 "name": "BaseBdev2", 00:07:32.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.806 "is_configured": false, 00:07:32.806 
"data_offset": 0, 00:07:32.806 "data_size": 0 00:07:32.806 } 00:07:32.806 ] 00:07:32.806 }' 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.806 16:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.373 [2024-11-20 16:59:57.081089] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:33.373 [2024-11-20 16:59:57.081147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.373 [2024-11-20 16:59:57.089105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.373 [2024-11-20 16:59:57.091705] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.373 [2024-11-20 16:59:57.091947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.373 "name": "Existed_Raid", 00:07:33.373 "uuid": "a5be51a6-e469-4460-9a1f-2cbedc5acb72", 00:07:33.373 "strip_size_kb": 0, 00:07:33.373 "state": "configuring", 00:07:33.373 "raid_level": "raid1", 00:07:33.373 "superblock": true, 00:07:33.373 "num_base_bdevs": 2, 00:07:33.373 "num_base_bdevs_discovered": 1, 00:07:33.373 "num_base_bdevs_operational": 2, 00:07:33.373 "base_bdevs_list": [ 00:07:33.373 { 00:07:33.373 "name": "BaseBdev1", 00:07:33.373 "uuid": "b7156142-84d7-49e7-b6d4-72aeef262b2b", 00:07:33.373 "is_configured": true, 00:07:33.373 "data_offset": 2048, 00:07:33.373 "data_size": 63488 00:07:33.373 }, 00:07:33.373 { 00:07:33.373 "name": "BaseBdev2", 00:07:33.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.373 "is_configured": false, 00:07:33.373 "data_offset": 0, 00:07:33.373 "data_size": 0 00:07:33.373 } 00:07:33.373 ] 00:07:33.373 }' 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.373 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.940 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:33.940 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.940 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.940 [2024-11-20 16:59:57.676439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.940 [2024-11-20 16:59:57.676743] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:33.940 [2024-11-20 16:59:57.676763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:33.940 BaseBdev2 00:07:33.940 [2024-11-20 16:59:57.677314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.941 [2024-11-20 16:59:57.677646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:33.941 [2024-11-20 16:59:57.677672] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:33.941 [2024-11-20 16:59:57.677874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:33.941 [ 00:07:33.941 { 00:07:33.941 "name": "BaseBdev2", 00:07:33.941 "aliases": [ 00:07:33.941 "52db77a1-cc0a-4d3d-9283-1e119df237ea" 00:07:33.941 ], 00:07:33.941 "product_name": "Malloc disk", 00:07:33.941 "block_size": 512, 00:07:33.941 "num_blocks": 65536, 00:07:33.941 "uuid": "52db77a1-cc0a-4d3d-9283-1e119df237ea", 00:07:33.941 "assigned_rate_limits": { 00:07:33.941 "rw_ios_per_sec": 0, 00:07:33.941 "rw_mbytes_per_sec": 0, 00:07:33.941 "r_mbytes_per_sec": 0, 00:07:33.941 "w_mbytes_per_sec": 0 00:07:33.941 }, 00:07:33.941 "claimed": true, 00:07:33.941 "claim_type": "exclusive_write", 00:07:33.941 "zoned": false, 00:07:33.941 "supported_io_types": { 00:07:33.941 "read": true, 00:07:33.941 "write": true, 00:07:33.941 "unmap": true, 00:07:33.941 "flush": true, 00:07:33.941 "reset": true, 00:07:33.941 "nvme_admin": false, 00:07:33.941 "nvme_io": false, 00:07:33.941 "nvme_io_md": false, 00:07:33.941 "write_zeroes": true, 00:07:33.941 "zcopy": true, 00:07:33.941 "get_zone_info": false, 00:07:33.941 "zone_management": false, 00:07:33.941 "zone_append": false, 00:07:33.941 "compare": false, 00:07:33.941 "compare_and_write": false, 00:07:33.941 "abort": true, 00:07:33.941 "seek_hole": false, 00:07:33.941 "seek_data": false, 00:07:33.941 "copy": true, 00:07:33.941 "nvme_iov_md": false 00:07:33.941 }, 00:07:33.941 "memory_domains": [ 00:07:33.941 { 00:07:33.941 "dma_device_id": "system", 00:07:33.941 "dma_device_type": 1 00:07:33.941 }, 00:07:33.941 { 00:07:33.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.941 "dma_device_type": 2 00:07:33.941 } 00:07:33.941 ], 00:07:33.941 "driver_specific": {} 00:07:33.941 } 00:07:33.941 ] 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:33.941 "name": "Existed_Raid", 00:07:33.941 "uuid": "a5be51a6-e469-4460-9a1f-2cbedc5acb72", 00:07:33.941 "strip_size_kb": 0, 00:07:33.941 "state": "online", 00:07:33.941 "raid_level": "raid1", 00:07:33.941 "superblock": true, 00:07:33.941 "num_base_bdevs": 2, 00:07:33.941 "num_base_bdevs_discovered": 2, 00:07:33.941 "num_base_bdevs_operational": 2, 00:07:33.941 "base_bdevs_list": [ 00:07:33.941 { 00:07:33.941 "name": "BaseBdev1", 00:07:33.941 "uuid": "b7156142-84d7-49e7-b6d4-72aeef262b2b", 00:07:33.941 "is_configured": true, 00:07:33.941 "data_offset": 2048, 00:07:33.941 "data_size": 63488 00:07:33.941 }, 00:07:33.941 { 00:07:33.941 "name": "BaseBdev2", 00:07:33.941 "uuid": "52db77a1-cc0a-4d3d-9283-1e119df237ea", 00:07:33.941 "is_configured": true, 00:07:33.941 "data_offset": 2048, 00:07:33.941 "data_size": 63488 00:07:33.941 } 00:07:33.941 ] 00:07:33.941 }' 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.941 16:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.507 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:34.507 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:34.507 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:34.507 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:34.507 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:34.507 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:34.507 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:34.507 16:59:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:34.507 16:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.507 16:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.507 [2024-11-20 16:59:58.237023] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.507 16:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.507 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:34.507 "name": "Existed_Raid", 00:07:34.507 "aliases": [ 00:07:34.507 "a5be51a6-e469-4460-9a1f-2cbedc5acb72" 00:07:34.507 ], 00:07:34.507 "product_name": "Raid Volume", 00:07:34.507 "block_size": 512, 00:07:34.507 "num_blocks": 63488, 00:07:34.507 "uuid": "a5be51a6-e469-4460-9a1f-2cbedc5acb72", 00:07:34.507 "assigned_rate_limits": { 00:07:34.507 "rw_ios_per_sec": 0, 00:07:34.507 "rw_mbytes_per_sec": 0, 00:07:34.507 "r_mbytes_per_sec": 0, 00:07:34.507 "w_mbytes_per_sec": 0 00:07:34.507 }, 00:07:34.507 "claimed": false, 00:07:34.507 "zoned": false, 00:07:34.507 "supported_io_types": { 00:07:34.507 "read": true, 00:07:34.507 "write": true, 00:07:34.507 "unmap": false, 00:07:34.507 "flush": false, 00:07:34.507 "reset": true, 00:07:34.507 "nvme_admin": false, 00:07:34.507 "nvme_io": false, 00:07:34.507 "nvme_io_md": false, 00:07:34.507 "write_zeroes": true, 00:07:34.507 "zcopy": false, 00:07:34.507 "get_zone_info": false, 00:07:34.507 "zone_management": false, 00:07:34.507 "zone_append": false, 00:07:34.507 "compare": false, 00:07:34.507 "compare_and_write": false, 00:07:34.507 "abort": false, 00:07:34.507 "seek_hole": false, 00:07:34.507 "seek_data": false, 00:07:34.507 "copy": false, 00:07:34.507 "nvme_iov_md": false 00:07:34.507 }, 00:07:34.507 "memory_domains": [ 00:07:34.507 { 00:07:34.507 "dma_device_id": "system", 00:07:34.507 
"dma_device_type": 1 00:07:34.507 }, 00:07:34.507 { 00:07:34.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.507 "dma_device_type": 2 00:07:34.507 }, 00:07:34.507 { 00:07:34.507 "dma_device_id": "system", 00:07:34.508 "dma_device_type": 1 00:07:34.508 }, 00:07:34.508 { 00:07:34.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.508 "dma_device_type": 2 00:07:34.508 } 00:07:34.508 ], 00:07:34.508 "driver_specific": { 00:07:34.508 "raid": { 00:07:34.508 "uuid": "a5be51a6-e469-4460-9a1f-2cbedc5acb72", 00:07:34.508 "strip_size_kb": 0, 00:07:34.508 "state": "online", 00:07:34.508 "raid_level": "raid1", 00:07:34.508 "superblock": true, 00:07:34.508 "num_base_bdevs": 2, 00:07:34.508 "num_base_bdevs_discovered": 2, 00:07:34.508 "num_base_bdevs_operational": 2, 00:07:34.508 "base_bdevs_list": [ 00:07:34.508 { 00:07:34.508 "name": "BaseBdev1", 00:07:34.508 "uuid": "b7156142-84d7-49e7-b6d4-72aeef262b2b", 00:07:34.508 "is_configured": true, 00:07:34.508 "data_offset": 2048, 00:07:34.508 "data_size": 63488 00:07:34.508 }, 00:07:34.508 { 00:07:34.508 "name": "BaseBdev2", 00:07:34.508 "uuid": "52db77a1-cc0a-4d3d-9283-1e119df237ea", 00:07:34.508 "is_configured": true, 00:07:34.508 "data_offset": 2048, 00:07:34.508 "data_size": 63488 00:07:34.508 } 00:07:34.508 ] 00:07:34.508 } 00:07:34.508 } 00:07:34.508 }' 00:07:34.508 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:34.508 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:34.508 BaseBdev2' 00:07:34.508 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.767 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:34.767 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:34.767 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:34.767 16:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.767 16:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.767 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.767 16:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.767 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.767 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.767 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.767 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:34.767 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.767 16:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.767 16:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.767 16:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.767 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:34.768 16:59:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.768 [2024-11-20 16:59:58.500700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.768 "name": "Existed_Raid", 00:07:34.768 "uuid": "a5be51a6-e469-4460-9a1f-2cbedc5acb72", 00:07:34.768 "strip_size_kb": 0, 00:07:34.768 "state": "online", 00:07:34.768 "raid_level": "raid1", 00:07:34.768 "superblock": true, 00:07:34.768 "num_base_bdevs": 2, 00:07:34.768 "num_base_bdevs_discovered": 1, 00:07:34.768 "num_base_bdevs_operational": 1, 00:07:34.768 "base_bdevs_list": [ 00:07:34.768 { 00:07:34.768 "name": null, 00:07:34.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.768 "is_configured": false, 00:07:34.768 "data_offset": 0, 00:07:34.768 "data_size": 63488 00:07:34.768 }, 00:07:34.768 { 00:07:34.768 "name": "BaseBdev2", 00:07:34.768 "uuid": "52db77a1-cc0a-4d3d-9283-1e119df237ea", 00:07:34.768 "is_configured": true, 00:07:34.768 "data_offset": 2048, 00:07:34.768 "data_size": 63488 00:07:34.768 } 00:07:34.768 ] 00:07:34.768 }' 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.768 16:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.336 16:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:07:35.336 16:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:35.336 16:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.336 16:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:35.336 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.336 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.336 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.336 16:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:35.336 16:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:35.336 16:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:35.336 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.336 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.336 [2024-11-20 16:59:59.155416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:35.336 [2024-11-20 16:59:59.155548] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.594 [2024-11-20 16:59:59.240242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.594 [2024-11-20 16:59:59.240322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.594 [2024-11-20 16:59:59.240344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62737 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62737 ']' 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62737 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62737 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.594 killing process with pid 62737 
00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62737' 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62737 00:07:35.594 [2024-11-20 16:59:59.330130] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.594 16:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62737 00:07:35.594 [2024-11-20 16:59:59.345104] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.530 ************************************ 00:07:36.530 END TEST raid_state_function_test_sb 00:07:36.530 ************************************ 00:07:36.530 17:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:36.530 00:07:36.530 real 0m5.520s 00:07:36.530 user 0m8.398s 00:07:36.530 sys 0m0.784s 00:07:36.530 17:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.530 17:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.530 17:00:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:36.530 17:00:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:36.530 17:00:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.530 17:00:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.790 ************************************ 00:07:36.790 START TEST raid_superblock_test 00:07:36.790 ************************************ 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62993 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62993 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62993 ']' 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.790 17:00:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.790 [2024-11-20 17:00:00.516321] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:07:36.790 [2024-11-20 17:00:00.516498] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62993 ] 00:07:37.050 [2024-11-20 17:00:00.698537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.050 [2024-11-20 17:00:00.827373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.309 [2024-11-20 17:00:01.025976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.309 [2024-11-20 17:00:01.026025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:37.877 17:00:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.877 malloc1 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.877 [2024-11-20 17:00:01.515270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:37.877 [2024-11-20 17:00:01.515505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.877 [2024-11-20 17:00:01.515559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:37.877 [2024-11-20 17:00:01.515575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.877 
[2024-11-20 17:00:01.518543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.877 [2024-11-20 17:00:01.518752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:37.877 pt1 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.877 malloc2 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.877 17:00:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.877 [2024-11-20 17:00:01.573612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:37.877 [2024-11-20 17:00:01.573689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.877 [2024-11-20 17:00:01.573724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:37.877 [2024-11-20 17:00:01.573738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.877 [2024-11-20 17:00:01.576678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.877 [2024-11-20 17:00:01.576916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:37.877 pt2 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.877 [2024-11-20 17:00:01.585722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:37.877 [2024-11-20 17:00:01.588080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:37.877 [2024-11-20 17:00:01.588421] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:37.877 [2024-11-20 17:00:01.588452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:37.877 [2024-11-20 
17:00:01.588795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:37.877 [2024-11-20 17:00:01.588996] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:37.877 [2024-11-20 17:00:01.589021] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:37.877 [2024-11-20 17:00:01.589201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.877 17:00:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.877 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.877 "name": "raid_bdev1", 00:07:37.877 "uuid": "6510195d-8524-460e-8277-e1eedc2a418a", 00:07:37.877 "strip_size_kb": 0, 00:07:37.877 "state": "online", 00:07:37.877 "raid_level": "raid1", 00:07:37.877 "superblock": true, 00:07:37.877 "num_base_bdevs": 2, 00:07:37.877 "num_base_bdevs_discovered": 2, 00:07:37.877 "num_base_bdevs_operational": 2, 00:07:37.877 "base_bdevs_list": [ 00:07:37.877 { 00:07:37.877 "name": "pt1", 00:07:37.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:37.877 "is_configured": true, 00:07:37.877 "data_offset": 2048, 00:07:37.877 "data_size": 63488 00:07:37.877 }, 00:07:37.877 { 00:07:37.878 "name": "pt2", 00:07:37.878 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.878 "is_configured": true, 00:07:37.878 "data_offset": 2048, 00:07:37.878 "data_size": 63488 00:07:37.878 } 00:07:37.878 ] 00:07:37.878 }' 00:07:37.878 17:00:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.878 17:00:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:38.446 
17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.446 [2024-11-20 17:00:02.094269] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:38.446 "name": "raid_bdev1", 00:07:38.446 "aliases": [ 00:07:38.446 "6510195d-8524-460e-8277-e1eedc2a418a" 00:07:38.446 ], 00:07:38.446 "product_name": "Raid Volume", 00:07:38.446 "block_size": 512, 00:07:38.446 "num_blocks": 63488, 00:07:38.446 "uuid": "6510195d-8524-460e-8277-e1eedc2a418a", 00:07:38.446 "assigned_rate_limits": { 00:07:38.446 "rw_ios_per_sec": 0, 00:07:38.446 "rw_mbytes_per_sec": 0, 00:07:38.446 "r_mbytes_per_sec": 0, 00:07:38.446 "w_mbytes_per_sec": 0 00:07:38.446 }, 00:07:38.446 "claimed": false, 00:07:38.446 "zoned": false, 00:07:38.446 "supported_io_types": { 00:07:38.446 "read": true, 00:07:38.446 "write": true, 00:07:38.446 "unmap": false, 00:07:38.446 "flush": false, 00:07:38.446 "reset": true, 00:07:38.446 "nvme_admin": false, 00:07:38.446 "nvme_io": false, 00:07:38.446 "nvme_io_md": false, 00:07:38.446 "write_zeroes": true, 00:07:38.446 "zcopy": false, 00:07:38.446 "get_zone_info": false, 00:07:38.446 "zone_management": false, 00:07:38.446 "zone_append": false, 00:07:38.446 "compare": false, 00:07:38.446 "compare_and_write": false, 00:07:38.446 "abort": false, 00:07:38.446 "seek_hole": false, 
00:07:38.446 "seek_data": false, 00:07:38.446 "copy": false, 00:07:38.446 "nvme_iov_md": false 00:07:38.446 }, 00:07:38.446 "memory_domains": [ 00:07:38.446 { 00:07:38.446 "dma_device_id": "system", 00:07:38.446 "dma_device_type": 1 00:07:38.446 }, 00:07:38.446 { 00:07:38.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.446 "dma_device_type": 2 00:07:38.446 }, 00:07:38.446 { 00:07:38.446 "dma_device_id": "system", 00:07:38.446 "dma_device_type": 1 00:07:38.446 }, 00:07:38.446 { 00:07:38.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.446 "dma_device_type": 2 00:07:38.446 } 00:07:38.446 ], 00:07:38.446 "driver_specific": { 00:07:38.446 "raid": { 00:07:38.446 "uuid": "6510195d-8524-460e-8277-e1eedc2a418a", 00:07:38.446 "strip_size_kb": 0, 00:07:38.446 "state": "online", 00:07:38.446 "raid_level": "raid1", 00:07:38.446 "superblock": true, 00:07:38.446 "num_base_bdevs": 2, 00:07:38.446 "num_base_bdevs_discovered": 2, 00:07:38.446 "num_base_bdevs_operational": 2, 00:07:38.446 "base_bdevs_list": [ 00:07:38.446 { 00:07:38.446 "name": "pt1", 00:07:38.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:38.446 "is_configured": true, 00:07:38.446 "data_offset": 2048, 00:07:38.446 "data_size": 63488 00:07:38.446 }, 00:07:38.446 { 00:07:38.446 "name": "pt2", 00:07:38.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:38.446 "is_configured": true, 00:07:38.446 "data_offset": 2048, 00:07:38.446 "data_size": 63488 00:07:38.446 } 00:07:38.446 ] 00:07:38.446 } 00:07:38.446 } 00:07:38.446 }' 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:38.446 pt2' 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.446 17:00:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.446 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.706 [2024-11-20 17:00:02.358217] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6510195d-8524-460e-8277-e1eedc2a418a 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6510195d-8524-460e-8277-e1eedc2a418a ']' 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.706 [2024-11-20 17:00:02.405893] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:38.706 [2024-11-20 17:00:02.405920] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.706 [2024-11-20 17:00:02.406008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.706 [2024-11-20 17:00:02.406079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:38.706 [2024-11-20 17:00:02.406098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.706 [2024-11-20 17:00:02.533970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:38.706 [2024-11-20 17:00:02.536474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:38.706 [2024-11-20 17:00:02.536582] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:07:38.706 [2024-11-20 17:00:02.536669] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:38.706 [2024-11-20 17:00:02.536695] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:38.706 [2024-11-20 17:00:02.536710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:38.706 request: 00:07:38.706 { 00:07:38.706 "name": "raid_bdev1", 00:07:38.706 "raid_level": "raid1", 00:07:38.706 "base_bdevs": [ 00:07:38.706 "malloc1", 00:07:38.706 "malloc2" 00:07:38.706 ], 00:07:38.706 "superblock": false, 00:07:38.706 "method": "bdev_raid_create", 00:07:38.706 "req_id": 1 00:07:38.706 } 00:07:38.706 Got JSON-RPC error response 00:07:38.706 response: 00:07:38.706 { 00:07:38.706 "code": -17, 00:07:38.706 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:38.706 } 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:38.706 17:00:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.966 [2024-11-20 17:00:02.593975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:38.966 [2024-11-20 17:00:02.594041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.966 [2024-11-20 17:00:02.594072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:38.966 [2024-11-20 17:00:02.594090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.966 [2024-11-20 17:00:02.596903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.966 [2024-11-20 17:00:02.597073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:38.966 [2024-11-20 17:00:02.597184] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:38.966 [2024-11-20 17:00:02.597256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:38.966 pt1 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:38.966 17:00:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.966 "name": "raid_bdev1", 00:07:38.966 "uuid": "6510195d-8524-460e-8277-e1eedc2a418a", 00:07:38.966 "strip_size_kb": 0, 00:07:38.966 "state": "configuring", 00:07:38.966 "raid_level": "raid1", 00:07:38.966 "superblock": true, 00:07:38.966 "num_base_bdevs": 2, 00:07:38.966 "num_base_bdevs_discovered": 1, 00:07:38.966 "num_base_bdevs_operational": 2, 00:07:38.966 "base_bdevs_list": [ 00:07:38.966 { 00:07:38.966 "name": "pt1", 00:07:38.966 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:38.966 
"is_configured": true, 00:07:38.966 "data_offset": 2048, 00:07:38.966 "data_size": 63488 00:07:38.966 }, 00:07:38.966 { 00:07:38.966 "name": null, 00:07:38.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:38.966 "is_configured": false, 00:07:38.966 "data_offset": 2048, 00:07:38.966 "data_size": 63488 00:07:38.966 } 00:07:38.966 ] 00:07:38.966 }' 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.966 17:00:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.533 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:39.533 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:39.533 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:39.533 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:39.533 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.533 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.533 [2024-11-20 17:00:03.138170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:39.534 [2024-11-20 17:00:03.138265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.534 [2024-11-20 17:00:03.138295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:39.534 [2024-11-20 17:00:03.138311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.534 [2024-11-20 17:00:03.138903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.534 [2024-11-20 17:00:03.138940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:39.534 [2024-11-20 17:00:03.139037] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:39.534 [2024-11-20 17:00:03.139076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:39.534 [2024-11-20 17:00:03.139218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:39.534 [2024-11-20 17:00:03.139239] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:39.534 [2024-11-20 17:00:03.139562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:39.534 [2024-11-20 17:00:03.139778] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:39.534 [2024-11-20 17:00:03.139794] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:39.534 [2024-11-20 17:00:03.139962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.534 pt2 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.534 
17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.534 "name": "raid_bdev1", 00:07:39.534 "uuid": "6510195d-8524-460e-8277-e1eedc2a418a", 00:07:39.534 "strip_size_kb": 0, 00:07:39.534 "state": "online", 00:07:39.534 "raid_level": "raid1", 00:07:39.534 "superblock": true, 00:07:39.534 "num_base_bdevs": 2, 00:07:39.534 "num_base_bdevs_discovered": 2, 00:07:39.534 "num_base_bdevs_operational": 2, 00:07:39.534 "base_bdevs_list": [ 00:07:39.534 { 00:07:39.534 "name": "pt1", 00:07:39.534 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:39.534 "is_configured": true, 00:07:39.534 "data_offset": 2048, 00:07:39.534 "data_size": 63488 00:07:39.534 }, 00:07:39.534 { 00:07:39.534 "name": "pt2", 00:07:39.534 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:39.534 "is_configured": true, 00:07:39.534 "data_offset": 2048, 00:07:39.534 "data_size": 63488 00:07:39.534 } 00:07:39.534 ] 00:07:39.534 }' 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:39.534 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.102 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:40.102 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:40.102 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:40.102 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:40.102 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:40.102 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:40.102 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:40.102 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.102 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.102 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:40.102 [2024-11-20 17:00:03.670677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.102 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.102 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:40.102 "name": "raid_bdev1", 00:07:40.102 "aliases": [ 00:07:40.102 "6510195d-8524-460e-8277-e1eedc2a418a" 00:07:40.102 ], 00:07:40.102 "product_name": "Raid Volume", 00:07:40.102 "block_size": 512, 00:07:40.102 "num_blocks": 63488, 00:07:40.102 "uuid": "6510195d-8524-460e-8277-e1eedc2a418a", 00:07:40.102 "assigned_rate_limits": { 00:07:40.102 "rw_ios_per_sec": 0, 00:07:40.102 "rw_mbytes_per_sec": 0, 00:07:40.102 "r_mbytes_per_sec": 0, 00:07:40.102 "w_mbytes_per_sec": 0 
00:07:40.102 }, 00:07:40.102 "claimed": false, 00:07:40.102 "zoned": false, 00:07:40.102 "supported_io_types": { 00:07:40.102 "read": true, 00:07:40.102 "write": true, 00:07:40.102 "unmap": false, 00:07:40.102 "flush": false, 00:07:40.102 "reset": true, 00:07:40.102 "nvme_admin": false, 00:07:40.102 "nvme_io": false, 00:07:40.102 "nvme_io_md": false, 00:07:40.102 "write_zeroes": true, 00:07:40.102 "zcopy": false, 00:07:40.102 "get_zone_info": false, 00:07:40.102 "zone_management": false, 00:07:40.102 "zone_append": false, 00:07:40.102 "compare": false, 00:07:40.102 "compare_and_write": false, 00:07:40.102 "abort": false, 00:07:40.102 "seek_hole": false, 00:07:40.102 "seek_data": false, 00:07:40.102 "copy": false, 00:07:40.102 "nvme_iov_md": false 00:07:40.102 }, 00:07:40.102 "memory_domains": [ 00:07:40.102 { 00:07:40.102 "dma_device_id": "system", 00:07:40.102 "dma_device_type": 1 00:07:40.102 }, 00:07:40.102 { 00:07:40.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.102 "dma_device_type": 2 00:07:40.102 }, 00:07:40.102 { 00:07:40.102 "dma_device_id": "system", 00:07:40.102 "dma_device_type": 1 00:07:40.102 }, 00:07:40.102 { 00:07:40.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.102 "dma_device_type": 2 00:07:40.102 } 00:07:40.102 ], 00:07:40.102 "driver_specific": { 00:07:40.102 "raid": { 00:07:40.102 "uuid": "6510195d-8524-460e-8277-e1eedc2a418a", 00:07:40.102 "strip_size_kb": 0, 00:07:40.102 "state": "online", 00:07:40.102 "raid_level": "raid1", 00:07:40.102 "superblock": true, 00:07:40.102 "num_base_bdevs": 2, 00:07:40.102 "num_base_bdevs_discovered": 2, 00:07:40.102 "num_base_bdevs_operational": 2, 00:07:40.102 "base_bdevs_list": [ 00:07:40.102 { 00:07:40.102 "name": "pt1", 00:07:40.102 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:40.102 "is_configured": true, 00:07:40.102 "data_offset": 2048, 00:07:40.102 "data_size": 63488 00:07:40.102 }, 00:07:40.102 { 00:07:40.102 "name": "pt2", 00:07:40.102 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:07:40.102 "is_configured": true, 00:07:40.102 "data_offset": 2048, 00:07:40.102 "data_size": 63488 00:07:40.102 } 00:07:40.102 ] 00:07:40.102 } 00:07:40.102 } 00:07:40.102 }' 00:07:40.102 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:40.102 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:40.102 pt2' 00:07:40.102 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.102 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.103 [2024-11-20 17:00:03.922718] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6510195d-8524-460e-8277-e1eedc2a418a '!=' 6510195d-8524-460e-8277-e1eedc2a418a ']' 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:40.103 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:40.361 [2024-11-20 17:00:03.974506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.361 17:00:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.361 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:40.361 "name": "raid_bdev1", 00:07:40.361 "uuid": "6510195d-8524-460e-8277-e1eedc2a418a", 00:07:40.361 "strip_size_kb": 0, 00:07:40.361 "state": "online", 00:07:40.361 "raid_level": "raid1", 00:07:40.361 "superblock": true, 00:07:40.361 "num_base_bdevs": 2, 00:07:40.361 "num_base_bdevs_discovered": 1, 00:07:40.361 "num_base_bdevs_operational": 1, 00:07:40.361 "base_bdevs_list": [ 00:07:40.361 { 00:07:40.361 "name": null, 00:07:40.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.361 "is_configured": false, 00:07:40.361 "data_offset": 0, 00:07:40.361 "data_size": 63488 00:07:40.361 }, 00:07:40.361 { 00:07:40.361 "name": "pt2", 00:07:40.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.361 "is_configured": true, 00:07:40.361 "data_offset": 2048, 00:07:40.361 "data_size": 63488 00:07:40.361 } 00:07:40.361 ] 00:07:40.361 }' 00:07:40.361 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.361 17:00:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.928 [2024-11-20 17:00:04.494630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:40.928 [2024-11-20 17:00:04.494662] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:40.928 [2024-11-20 17:00:04.494747] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.928 [2024-11-20 17:00:04.494838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:40.928 [2024-11-20 17:00:04.494859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:40.928 17:00:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.929 [2024-11-20 17:00:04.566621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:40.929 [2024-11-20 17:00:04.566864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.929 [2024-11-20 17:00:04.566910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:40.929 [2024-11-20 17:00:04.566928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.929 [2024-11-20 17:00:04.569750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.929 [2024-11-20 17:00:04.569820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:40.929 [2024-11-20 17:00:04.569930] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:40.929 [2024-11-20 17:00:04.569995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:40.929 [2024-11-20 17:00:04.570121] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:40.929 [2024-11-20 17:00:04.570148] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:40.929 [2024-11-20 17:00:04.570454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:40.929 [2024-11-20 17:00:04.570639] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:40.929 [2024-11-20 17:00:04.570654] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:07:40.929 [2024-11-20 17:00:04.570919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.929 pt2 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:40.929 "name": "raid_bdev1", 00:07:40.929 "uuid": "6510195d-8524-460e-8277-e1eedc2a418a", 00:07:40.929 "strip_size_kb": 0, 00:07:40.929 "state": "online", 00:07:40.929 "raid_level": "raid1", 00:07:40.929 "superblock": true, 00:07:40.929 "num_base_bdevs": 2, 00:07:40.929 "num_base_bdevs_discovered": 1, 00:07:40.929 "num_base_bdevs_operational": 1, 00:07:40.929 "base_bdevs_list": [ 00:07:40.929 { 00:07:40.929 "name": null, 00:07:40.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.929 "is_configured": false, 00:07:40.929 "data_offset": 2048, 00:07:40.929 "data_size": 63488 00:07:40.929 }, 00:07:40.929 { 00:07:40.929 "name": "pt2", 00:07:40.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.929 "is_configured": true, 00:07:40.929 "data_offset": 2048, 00:07:40.929 "data_size": 63488 00:07:40.929 } 00:07:40.929 ] 00:07:40.929 }' 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.929 17:00:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.496 [2024-11-20 17:00:05.066933] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.496 [2024-11-20 17:00:05.067096] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.496 [2024-11-20 17:00:05.067212] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.496 [2024-11-20 17:00:05.067290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.496 [2024-11-20 17:00:05.067323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.496 [2024-11-20 17:00:05.126951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:41.496 [2024-11-20 17:00:05.127149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.496 [2024-11-20 17:00:05.127191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:41.496 [2024-11-20 17:00:05.127206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.496 [2024-11-20 17:00:05.130182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.496 [2024-11-20 17:00:05.130407] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:41.496 [2024-11-20 17:00:05.130520] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:41.496 [2024-11-20 17:00:05.130577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:41.496 [2024-11-20 17:00:05.130747] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:41.496 [2024-11-20 17:00:05.130782] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.496 [2024-11-20 17:00:05.130805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:07:41.496 [2024-11-20 17:00:05.130868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:41.496 [2024-11-20 17:00:05.130967] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:07:41.496 [2024-11-20 17:00:05.130983] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:41.496 [2024-11-20 17:00:05.131336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:41.496 [2024-11-20 17:00:05.131519] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:07:41.496 [2024-11-20 17:00:05.131540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:07:41.496 [2024-11-20 17:00:05.131807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.496 pt1 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.496 "name": "raid_bdev1", 00:07:41.496 "uuid": "6510195d-8524-460e-8277-e1eedc2a418a", 00:07:41.496 "strip_size_kb": 0, 00:07:41.496 "state": "online", 00:07:41.496 "raid_level": "raid1", 00:07:41.496 "superblock": true, 00:07:41.496 "num_base_bdevs": 2, 00:07:41.496 "num_base_bdevs_discovered": 1, 00:07:41.496 "num_base_bdevs_operational": 
1, 00:07:41.496 "base_bdevs_list": [ 00:07:41.496 { 00:07:41.496 "name": null, 00:07:41.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.496 "is_configured": false, 00:07:41.496 "data_offset": 2048, 00:07:41.496 "data_size": 63488 00:07:41.496 }, 00:07:41.496 { 00:07:41.496 "name": "pt2", 00:07:41.496 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:41.496 "is_configured": true, 00:07:41.496 "data_offset": 2048, 00:07:41.496 "data_size": 63488 00:07:41.496 } 00:07:41.496 ] 00:07:41.496 }' 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.496 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.062 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:42.062 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:42.062 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.062 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.062 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.062 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:42.063 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:42.063 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.063 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.063 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:42.063 [2024-11-20 17:00:05.695491] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.063 17:00:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.063 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6510195d-8524-460e-8277-e1eedc2a418a '!=' 6510195d-8524-460e-8277-e1eedc2a418a ']' 00:07:42.063 17:00:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62993 00:07:42.063 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62993 ']' 00:07:42.063 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62993 00:07:42.063 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:42.063 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.063 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62993 00:07:42.063 killing process with pid 62993 00:07:42.063 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.063 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.063 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62993' 00:07:42.063 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62993 00:07:42.063 [2024-11-20 17:00:05.774311] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:42.063 17:00:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62993 00:07:42.063 [2024-11-20 17:00:05.774415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.063 [2024-11-20 17:00:05.774476] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:42.063 [2024-11-20 17:00:05.774500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:07:42.322 [2024-11-20 17:00:05.953153] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.258 17:00:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:43.258 00:07:43.258 real 0m6.557s 00:07:43.258 user 0m10.431s 00:07:43.258 sys 0m0.887s 00:07:43.258 ************************************ 00:07:43.258 END TEST raid_superblock_test 00:07:43.258 ************************************ 00:07:43.258 17:00:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.258 17:00:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.258 17:00:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:43.258 17:00:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:43.258 17:00:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.258 17:00:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.258 ************************************ 00:07:43.258 START TEST raid_read_error_test 00:07:43.258 ************************************ 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZJVtEccZfz 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63330 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63330 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63330 ']' 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.258 17:00:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.516 [2024-11-20 17:00:07.126552] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:07:43.516 [2024-11-20 17:00:07.127016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63330 ] 00:07:43.516 [2024-11-20 17:00:07.313505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.773 [2024-11-20 17:00:07.435970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.773 [2024-11-20 17:00:07.629998] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.773 [2024-11-20 17:00:07.630071] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 
-- # for bdev in "${base_bdevs[@]}" 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.340 BaseBdev1_malloc 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.340 true 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.340 [2024-11-20 17:00:08.134097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:44.340 [2024-11-20 17:00:08.134338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.340 [2024-11-20 17:00:08.134379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:44.340 [2024-11-20 17:00:08.134399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.340 [2024-11-20 17:00:08.137223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.340 [2024-11-20 17:00:08.137270] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:44.340 BaseBdev1 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.340 BaseBdev2_malloc 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.340 true 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.340 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.340 [2024-11-20 17:00:08.188397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:44.340 [2024-11-20 17:00:08.188473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.341 [2024-11-20 17:00:08.188497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:44.341 [2024-11-20 
17:00:08.188514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.341 [2024-11-20 17:00:08.191336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.341 [2024-11-20 17:00:08.191385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:44.341 BaseBdev2 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.341 [2024-11-20 17:00:08.196462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.341 [2024-11-20 17:00:08.199054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:44.341 [2024-11-20 17:00:08.199322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:44.341 [2024-11-20 17:00:08.199347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:44.341 [2024-11-20 17:00:08.199649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:44.341 [2024-11-20 17:00:08.200067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:44.341 [2024-11-20 17:00:08.200216] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:44.341 [2024-11-20 17:00:08.200603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.341 17:00:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.341 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.600 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.600 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.600 "name": "raid_bdev1", 00:07:44.600 "uuid": "0f839bb3-d4c2-4090-9958-6233e8fd319e", 00:07:44.600 "strip_size_kb": 0, 00:07:44.600 "state": "online", 00:07:44.600 "raid_level": "raid1", 00:07:44.600 "superblock": true, 00:07:44.600 "num_base_bdevs": 2, 
00:07:44.600 "num_base_bdevs_discovered": 2, 00:07:44.600 "num_base_bdevs_operational": 2, 00:07:44.600 "base_bdevs_list": [ 00:07:44.600 { 00:07:44.600 "name": "BaseBdev1", 00:07:44.600 "uuid": "4677b81d-71d6-5428-a68a-61db8aa5d383", 00:07:44.600 "is_configured": true, 00:07:44.600 "data_offset": 2048, 00:07:44.600 "data_size": 63488 00:07:44.600 }, 00:07:44.600 { 00:07:44.600 "name": "BaseBdev2", 00:07:44.600 "uuid": "5ce39030-65a1-5e6f-8e33-9da441ee1395", 00:07:44.600 "is_configured": true, 00:07:44.600 "data_offset": 2048, 00:07:44.600 "data_size": 63488 00:07:44.600 } 00:07:44.600 ] 00:07:44.600 }' 00:07:44.600 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.600 17:00:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.858 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:44.858 17:00:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:45.117 [2024-11-20 17:00:08.806008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:46.076 17:00:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.076 17:00:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.077 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.077 17:00:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.077 17:00:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.077 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.077 "name": "raid_bdev1", 00:07:46.077 "uuid": "0f839bb3-d4c2-4090-9958-6233e8fd319e", 00:07:46.077 "strip_size_kb": 0, 00:07:46.077 "state": "online", 
00:07:46.077 "raid_level": "raid1", 00:07:46.077 "superblock": true, 00:07:46.077 "num_base_bdevs": 2, 00:07:46.077 "num_base_bdevs_discovered": 2, 00:07:46.077 "num_base_bdevs_operational": 2, 00:07:46.077 "base_bdevs_list": [ 00:07:46.077 { 00:07:46.077 "name": "BaseBdev1", 00:07:46.077 "uuid": "4677b81d-71d6-5428-a68a-61db8aa5d383", 00:07:46.077 "is_configured": true, 00:07:46.077 "data_offset": 2048, 00:07:46.077 "data_size": 63488 00:07:46.077 }, 00:07:46.077 { 00:07:46.077 "name": "BaseBdev2", 00:07:46.077 "uuid": "5ce39030-65a1-5e6f-8e33-9da441ee1395", 00:07:46.077 "is_configured": true, 00:07:46.077 "data_offset": 2048, 00:07:46.077 "data_size": 63488 00:07:46.077 } 00:07:46.077 ] 00:07:46.077 }' 00:07:46.077 17:00:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.077 17:00:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.335 17:00:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:46.336 17:00:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.336 17:00:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.594 [2024-11-20 17:00:10.206841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:46.594 [2024-11-20 17:00:10.206883] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.594 [2024-11-20 17:00:10.210346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.594 [2024-11-20 17:00:10.210403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.594 [2024-11-20 17:00:10.210501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.595 [2024-11-20 17:00:10.210520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:07:46.595 { 00:07:46.595 "results": [ 00:07:46.595 { 00:07:46.595 "job": "raid_bdev1", 00:07:46.595 "core_mask": "0x1", 00:07:46.595 "workload": "randrw", 00:07:46.595 "percentage": 50, 00:07:46.595 "status": "finished", 00:07:46.595 "queue_depth": 1, 00:07:46.595 "io_size": 131072, 00:07:46.595 "runtime": 1.39815, 00:07:46.595 "iops": 13158.817008189393, 00:07:46.595 "mibps": 1644.8521260236741, 00:07:46.595 "io_failed": 0, 00:07:46.595 "io_timeout": 0, 00:07:46.595 "avg_latency_us": 71.92795738667247, 00:07:46.595 "min_latency_us": 38.86545454545455, 00:07:46.595 "max_latency_us": 1861.8181818181818 00:07:46.595 } 00:07:46.595 ], 00:07:46.595 "core_count": 1 00:07:46.595 } 00:07:46.595 17:00:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.595 17:00:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63330 00:07:46.595 17:00:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63330 ']' 00:07:46.595 17:00:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63330 00:07:46.595 17:00:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:46.595 17:00:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.595 17:00:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63330 00:07:46.595 killing process with pid 63330 00:07:46.595 17:00:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.595 17:00:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.595 17:00:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63330' 00:07:46.595 17:00:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63330 00:07:46.595 [2024-11-20 
17:00:10.247719] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.595 17:00:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63330 00:07:46.595 [2024-11-20 17:00:10.360993] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:47.972 17:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZJVtEccZfz 00:07:47.972 17:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:47.972 17:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:47.972 17:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:47.972 17:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:47.972 17:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:47.972 17:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:47.972 17:00:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:47.972 00:07:47.972 real 0m4.421s 00:07:47.972 user 0m5.535s 00:07:47.972 sys 0m0.528s 00:07:47.972 ************************************ 00:07:47.972 END TEST raid_read_error_test 00:07:47.972 ************************************ 00:07:47.972 17:00:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.972 17:00:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.972 17:00:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:47.972 17:00:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:47.972 17:00:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.972 17:00:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:47.972 ************************************ 00:07:47.972 START TEST 
raid_write_error_test 00:07:47.972 ************************************ 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:47.972 17:00:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CMWGP0GDEq 00:07:47.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63470 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63470 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63470 ']' 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.972 17:00:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.973 [2024-11-20 17:00:11.599017] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:07:47.973 [2024-11-20 17:00:11.599464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63470 ] 00:07:47.973 [2024-11-20 17:00:11.784145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.231 [2024-11-20 17:00:11.911374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.489 [2024-11-20 17:00:12.113621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.489 [2024-11-20 17:00:12.113691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.748 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.748 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:48.748 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:48.748 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:48.748 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.748 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.006 BaseBdev1_malloc 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.006 true 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.006 [2024-11-20 17:00:12.637277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:49.006 [2024-11-20 17:00:12.637354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.006 [2024-11-20 17:00:12.637382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:49.006 [2024-11-20 17:00:12.637399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.006 [2024-11-20 17:00:12.640205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.006 [2024-11-20 17:00:12.640253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:49.006 BaseBdev1 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.006 BaseBdev2_malloc 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:49.006 17:00:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.006 true 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.006 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.007 [2024-11-20 17:00:12.692761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:49.007 [2024-11-20 17:00:12.692870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.007 [2024-11-20 17:00:12.692896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:49.007 [2024-11-20 17:00:12.692913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.007 [2024-11-20 17:00:12.695754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.007 [2024-11-20 17:00:12.695855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:49.007 BaseBdev2 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.007 [2024-11-20 17:00:12.700869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:49.007 [2024-11-20 17:00:12.703358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:49.007 [2024-11-20 17:00:12.703607] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:49.007 [2024-11-20 17:00:12.703646] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:49.007 [2024-11-20 17:00:12.703967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:49.007 [2024-11-20 17:00:12.704190] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:49.007 [2024-11-20 17:00:12.704205] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:49.007 [2024-11-20 17:00:12.704378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.007 "name": "raid_bdev1", 00:07:49.007 "uuid": "12fc4a5e-95b1-41ad-a240-18f103825d34", 00:07:49.007 "strip_size_kb": 0, 00:07:49.007 "state": "online", 00:07:49.007 "raid_level": "raid1", 00:07:49.007 "superblock": true, 00:07:49.007 "num_base_bdevs": 2, 00:07:49.007 "num_base_bdevs_discovered": 2, 00:07:49.007 "num_base_bdevs_operational": 2, 00:07:49.007 "base_bdevs_list": [ 00:07:49.007 { 00:07:49.007 "name": "BaseBdev1", 00:07:49.007 "uuid": "e83d88da-8794-53b5-a700-79a365766ea2", 00:07:49.007 "is_configured": true, 00:07:49.007 "data_offset": 2048, 00:07:49.007 "data_size": 63488 00:07:49.007 }, 00:07:49.007 { 00:07:49.007 "name": "BaseBdev2", 00:07:49.007 "uuid": "09ed5d2e-caaf-555c-8779-733852747c30", 00:07:49.007 "is_configured": true, 00:07:49.007 "data_offset": 2048, 00:07:49.007 "data_size": 63488 00:07:49.007 } 00:07:49.007 ] 00:07:49.007 }' 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.007 17:00:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.574 17:00:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:49.574 17:00:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:49.574 [2024-11-20 17:00:13.318390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.509 [2024-11-20 17:00:14.202250] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:50.509 [2024-11-20 17:00:14.202330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:50.509 [2024-11-20 17:00:14.202554] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.509 "name": "raid_bdev1", 00:07:50.509 "uuid": "12fc4a5e-95b1-41ad-a240-18f103825d34", 00:07:50.509 "strip_size_kb": 0, 00:07:50.509 "state": "online", 00:07:50.509 "raid_level": "raid1", 00:07:50.509 "superblock": true, 00:07:50.509 "num_base_bdevs": 2, 00:07:50.509 "num_base_bdevs_discovered": 1, 00:07:50.509 "num_base_bdevs_operational": 1, 00:07:50.509 "base_bdevs_list": [ 00:07:50.509 { 00:07:50.509 "name": null, 00:07:50.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.509 "is_configured": false, 00:07:50.509 "data_offset": 0, 00:07:50.509 "data_size": 63488 00:07:50.509 }, 00:07:50.509 { 00:07:50.509 "name": 
"BaseBdev2", 00:07:50.509 "uuid": "09ed5d2e-caaf-555c-8779-733852747c30", 00:07:50.509 "is_configured": true, 00:07:50.509 "data_offset": 2048, 00:07:50.509 "data_size": 63488 00:07:50.509 } 00:07:50.509 ] 00:07:50.509 }' 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.509 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.079 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:51.079 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.079 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.079 [2024-11-20 17:00:14.729981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:51.079 [2024-11-20 17:00:14.730014] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:51.079 [2024-11-20 17:00:14.733524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.079 [2024-11-20 17:00:14.733718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.079 [2024-11-20 17:00:14.733873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.079 [2024-11-20 17:00:14.734110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:51.079 { 00:07:51.079 "results": [ 00:07:51.079 { 00:07:51.079 "job": "raid_bdev1", 00:07:51.079 "core_mask": "0x1", 00:07:51.079 "workload": "randrw", 00:07:51.079 "percentage": 50, 00:07:51.079 "status": "finished", 00:07:51.079 "queue_depth": 1, 00:07:51.079 "io_size": 131072, 00:07:51.079 "runtime": 1.408979, 00:07:51.079 "iops": 15642.53264243115, 00:07:51.079 "mibps": 1955.3165803038937, 00:07:51.079 "io_failed": 0, 00:07:51.079 "io_timeout": 0, 
00:07:51.079 "avg_latency_us": 59.82158455700379, 00:07:51.079 "min_latency_us": 38.86545454545455, 00:07:51.079 "max_latency_us": 1832.0290909090909 00:07:51.079 } 00:07:51.079 ], 00:07:51.079 "core_count": 1 00:07:51.079 } 00:07:51.079 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.079 17:00:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63470 00:07:51.079 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63470 ']' 00:07:51.079 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63470 00:07:51.079 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:51.079 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.079 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63470 00:07:51.079 killing process with pid 63470 00:07:51.079 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.079 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.079 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63470' 00:07:51.079 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63470 00:07:51.079 [2024-11-20 17:00:14.771147] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.079 17:00:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63470 00:07:51.079 [2024-11-20 17:00:14.882391] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.456 17:00:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CMWGP0GDEq 00:07:52.456 17:00:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:52.456 17:00:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:52.456 17:00:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:52.456 17:00:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:52.456 17:00:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.456 17:00:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:52.456 17:00:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:52.456 00:07:52.456 real 0m4.472s 00:07:52.456 user 0m5.635s 00:07:52.456 sys 0m0.528s 00:07:52.456 17:00:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.456 17:00:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.456 ************************************ 00:07:52.456 END TEST raid_write_error_test 00:07:52.456 ************************************ 00:07:52.456 17:00:15 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:52.456 17:00:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:52.456 17:00:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:52.456 17:00:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:52.456 17:00:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.456 17:00:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.456 ************************************ 00:07:52.456 START TEST raid_state_function_test 00:07:52.456 ************************************ 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:52.456 
17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63608 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:52.456 Process raid pid: 63608 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63608' 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63608 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63608 ']' 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.456 17:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.456 [2024-11-20 17:00:16.124258] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:07:52.456 [2024-11-20 17:00:16.124629] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.456 [2024-11-20 17:00:16.318587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.715 [2024-11-20 17:00:16.478526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.974 [2024-11-20 17:00:16.693017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.974 [2024-11-20 17:00:16.693069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.233 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.233 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:53.233 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:53.233 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.233 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.233 [2024-11-20 17:00:17.092848] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:53.233 [2024-11-20 17:00:17.092922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:53.233 [2024-11-20 17:00:17.092941] 
bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:53.233 [2024-11-20 17:00:17.092958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:53.233 [2024-11-20 17:00:17.092970] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:53.233 [2024-11-20 17:00:17.092985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:53.233 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.233 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:53.233 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.233 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.233 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.233 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.233 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:53.233 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.233 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.233 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.233 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.492 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.492 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.492 17:00:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.492 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.492 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.492 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.492 "name": "Existed_Raid", 00:07:53.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.492 "strip_size_kb": 64, 00:07:53.492 "state": "configuring", 00:07:53.492 "raid_level": "raid0", 00:07:53.492 "superblock": false, 00:07:53.492 "num_base_bdevs": 3, 00:07:53.492 "num_base_bdevs_discovered": 0, 00:07:53.492 "num_base_bdevs_operational": 3, 00:07:53.492 "base_bdevs_list": [ 00:07:53.492 { 00:07:53.492 "name": "BaseBdev1", 00:07:53.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.492 "is_configured": false, 00:07:53.492 "data_offset": 0, 00:07:53.492 "data_size": 0 00:07:53.492 }, 00:07:53.492 { 00:07:53.492 "name": "BaseBdev2", 00:07:53.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.492 "is_configured": false, 00:07:53.492 "data_offset": 0, 00:07:53.492 "data_size": 0 00:07:53.492 }, 00:07:53.493 { 00:07:53.493 "name": "BaseBdev3", 00:07:53.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.493 "is_configured": false, 00:07:53.493 "data_offset": 0, 00:07:53.493 "data_size": 0 00:07:53.493 } 00:07:53.493 ] 00:07:53.493 }' 00:07:53.493 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.493 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.752 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:53.752 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.752 17:00:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.752 [2024-11-20 17:00:17.604991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:53.752 [2024-11-20 17:00:17.605035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:53.752 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.752 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:53.752 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.752 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.752 [2024-11-20 17:00:17.612972] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:53.752 [2024-11-20 17:00:17.613028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:53.752 [2024-11-20 17:00:17.613045] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:53.752 [2024-11-20 17:00:17.613061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:53.752 [2024-11-20 17:00:17.613072] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:53.752 [2024-11-20 17:00:17.613087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:53.752 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.752 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:53.752 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:53.752 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.011 [2024-11-20 17:00:17.657714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.011 BaseBdev1 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.011 [ 00:07:54.011 { 00:07:54.011 "name": "BaseBdev1", 00:07:54.011 "aliases": [ 00:07:54.011 "34c90b90-17ad-48c2-bebf-37d76db89cfa" 00:07:54.011 ], 00:07:54.011 
"product_name": "Malloc disk", 00:07:54.011 "block_size": 512, 00:07:54.011 "num_blocks": 65536, 00:07:54.011 "uuid": "34c90b90-17ad-48c2-bebf-37d76db89cfa", 00:07:54.011 "assigned_rate_limits": { 00:07:54.011 "rw_ios_per_sec": 0, 00:07:54.011 "rw_mbytes_per_sec": 0, 00:07:54.011 "r_mbytes_per_sec": 0, 00:07:54.011 "w_mbytes_per_sec": 0 00:07:54.011 }, 00:07:54.011 "claimed": true, 00:07:54.011 "claim_type": "exclusive_write", 00:07:54.011 "zoned": false, 00:07:54.011 "supported_io_types": { 00:07:54.011 "read": true, 00:07:54.011 "write": true, 00:07:54.011 "unmap": true, 00:07:54.011 "flush": true, 00:07:54.011 "reset": true, 00:07:54.011 "nvme_admin": false, 00:07:54.011 "nvme_io": false, 00:07:54.011 "nvme_io_md": false, 00:07:54.011 "write_zeroes": true, 00:07:54.011 "zcopy": true, 00:07:54.011 "get_zone_info": false, 00:07:54.011 "zone_management": false, 00:07:54.011 "zone_append": false, 00:07:54.011 "compare": false, 00:07:54.011 "compare_and_write": false, 00:07:54.011 "abort": true, 00:07:54.011 "seek_hole": false, 00:07:54.011 "seek_data": false, 00:07:54.011 "copy": true, 00:07:54.011 "nvme_iov_md": false 00:07:54.011 }, 00:07:54.011 "memory_domains": [ 00:07:54.011 { 00:07:54.011 "dma_device_id": "system", 00:07:54.011 "dma_device_type": 1 00:07:54.011 }, 00:07:54.011 { 00:07:54.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.011 "dma_device_type": 2 00:07:54.011 } 00:07:54.011 ], 00:07:54.011 "driver_specific": {} 00:07:54.011 } 00:07:54.011 ] 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.011 17:00:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.011 "name": "Existed_Raid", 00:07:54.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.011 "strip_size_kb": 64, 00:07:54.011 "state": "configuring", 00:07:54.011 "raid_level": "raid0", 00:07:54.011 "superblock": false, 00:07:54.011 "num_base_bdevs": 3, 00:07:54.011 "num_base_bdevs_discovered": 1, 00:07:54.011 "num_base_bdevs_operational": 3, 00:07:54.011 "base_bdevs_list": [ 00:07:54.011 { 00:07:54.011 "name": "BaseBdev1", 
00:07:54.011 "uuid": "34c90b90-17ad-48c2-bebf-37d76db89cfa", 00:07:54.011 "is_configured": true, 00:07:54.011 "data_offset": 0, 00:07:54.011 "data_size": 65536 00:07:54.011 }, 00:07:54.011 { 00:07:54.011 "name": "BaseBdev2", 00:07:54.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.011 "is_configured": false, 00:07:54.011 "data_offset": 0, 00:07:54.011 "data_size": 0 00:07:54.011 }, 00:07:54.011 { 00:07:54.011 "name": "BaseBdev3", 00:07:54.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.011 "is_configured": false, 00:07:54.011 "data_offset": 0, 00:07:54.011 "data_size": 0 00:07:54.011 } 00:07:54.011 ] 00:07:54.011 }' 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.011 17:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.580 [2024-11-20 17:00:18.225943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:54.580 [2024-11-20 17:00:18.226001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.580 [2024-11-20 
17:00:18.233983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.580 [2024-11-20 17:00:18.236530] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:54.580 [2024-11-20 17:00:18.236608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:54.580 [2024-11-20 17:00:18.236624] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:54.580 [2024-11-20 17:00:18.236639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.580 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.580 "name": "Existed_Raid", 00:07:54.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.580 "strip_size_kb": 64, 00:07:54.581 "state": "configuring", 00:07:54.581 "raid_level": "raid0", 00:07:54.581 "superblock": false, 00:07:54.581 "num_base_bdevs": 3, 00:07:54.581 "num_base_bdevs_discovered": 1, 00:07:54.581 "num_base_bdevs_operational": 3, 00:07:54.581 "base_bdevs_list": [ 00:07:54.581 { 00:07:54.581 "name": "BaseBdev1", 00:07:54.581 "uuid": "34c90b90-17ad-48c2-bebf-37d76db89cfa", 00:07:54.581 "is_configured": true, 00:07:54.581 "data_offset": 0, 00:07:54.581 "data_size": 65536 00:07:54.581 }, 00:07:54.581 { 00:07:54.581 "name": "BaseBdev2", 00:07:54.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.581 "is_configured": false, 00:07:54.581 "data_offset": 0, 00:07:54.581 "data_size": 0 00:07:54.581 }, 00:07:54.581 { 00:07:54.581 "name": "BaseBdev3", 00:07:54.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.581 "is_configured": false, 00:07:54.581 "data_offset": 0, 00:07:54.581 "data_size": 0 00:07:54.581 } 00:07:54.581 ] 00:07:54.581 }' 00:07:54.581 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:54.581 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.148 [2024-11-20 17:00:18.808788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:55.148 BaseBdev2 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:55.148 17:00:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.148 [ 00:07:55.148 { 00:07:55.148 "name": "BaseBdev2", 00:07:55.148 "aliases": [ 00:07:55.148 "fcfcf7be-4c10-4110-a8ae-69c725a8050e" 00:07:55.148 ], 00:07:55.148 "product_name": "Malloc disk", 00:07:55.148 "block_size": 512, 00:07:55.148 "num_blocks": 65536, 00:07:55.148 "uuid": "fcfcf7be-4c10-4110-a8ae-69c725a8050e", 00:07:55.148 "assigned_rate_limits": { 00:07:55.148 "rw_ios_per_sec": 0, 00:07:55.148 "rw_mbytes_per_sec": 0, 00:07:55.148 "r_mbytes_per_sec": 0, 00:07:55.148 "w_mbytes_per_sec": 0 00:07:55.148 }, 00:07:55.148 "claimed": true, 00:07:55.148 "claim_type": "exclusive_write", 00:07:55.148 "zoned": false, 00:07:55.148 "supported_io_types": { 00:07:55.148 "read": true, 00:07:55.148 "write": true, 00:07:55.148 "unmap": true, 00:07:55.148 "flush": true, 00:07:55.148 "reset": true, 00:07:55.148 "nvme_admin": false, 00:07:55.148 "nvme_io": false, 00:07:55.148 "nvme_io_md": false, 00:07:55.148 "write_zeroes": true, 00:07:55.148 "zcopy": true, 00:07:55.148 "get_zone_info": false, 00:07:55.148 "zone_management": false, 00:07:55.148 "zone_append": false, 00:07:55.148 "compare": false, 00:07:55.148 "compare_and_write": false, 00:07:55.148 "abort": true, 00:07:55.148 "seek_hole": false, 00:07:55.148 "seek_data": false, 00:07:55.148 "copy": true, 00:07:55.148 "nvme_iov_md": false 00:07:55.148 }, 00:07:55.148 "memory_domains": [ 00:07:55.148 { 00:07:55.148 "dma_device_id": "system", 00:07:55.148 "dma_device_type": 1 00:07:55.148 }, 00:07:55.148 { 00:07:55.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.148 "dma_device_type": 2 00:07:55.148 } 00:07:55.148 ], 00:07:55.148 "driver_specific": {} 00:07:55.148 } 00:07:55.148 ] 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.148 17:00:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:55.148 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.149 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.149 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.149 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.149 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.149 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.149 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.149 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.149 17:00:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.149 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.149 "name": "Existed_Raid", 00:07:55.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.149 "strip_size_kb": 64, 00:07:55.149 "state": "configuring", 00:07:55.149 "raid_level": "raid0", 00:07:55.149 "superblock": false, 00:07:55.149 "num_base_bdevs": 3, 00:07:55.149 "num_base_bdevs_discovered": 2, 00:07:55.149 "num_base_bdevs_operational": 3, 00:07:55.149 "base_bdevs_list": [ 00:07:55.149 { 00:07:55.149 "name": "BaseBdev1", 00:07:55.149 "uuid": "34c90b90-17ad-48c2-bebf-37d76db89cfa", 00:07:55.149 "is_configured": true, 00:07:55.149 "data_offset": 0, 00:07:55.149 "data_size": 65536 00:07:55.149 }, 00:07:55.149 { 00:07:55.149 "name": "BaseBdev2", 00:07:55.149 "uuid": "fcfcf7be-4c10-4110-a8ae-69c725a8050e", 00:07:55.149 "is_configured": true, 00:07:55.149 "data_offset": 0, 00:07:55.149 "data_size": 65536 00:07:55.149 }, 00:07:55.149 { 00:07:55.149 "name": "BaseBdev3", 00:07:55.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.149 "is_configured": false, 00:07:55.149 "data_offset": 0, 00:07:55.149 "data_size": 0 00:07:55.149 } 00:07:55.149 ] 00:07:55.149 }' 00:07:55.149 17:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.149 17:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.747 [2024-11-20 17:00:19.414854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:55.747 [2024-11-20 17:00:19.414920] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:55.747 [2024-11-20 17:00:19.414941] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:55.747 [2024-11-20 17:00:19.415341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:55.747 [2024-11-20 17:00:19.415563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:55.747 [2024-11-20 17:00:19.415591] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:55.747 [2024-11-20 17:00:19.415968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.747 BaseBdev3 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.747 
17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.747 [ 00:07:55.747 { 00:07:55.747 "name": "BaseBdev3", 00:07:55.747 "aliases": [ 00:07:55.747 "ebb454fa-0e6e-4342-ae59-f3f7a82c46b5" 00:07:55.747 ], 00:07:55.747 "product_name": "Malloc disk", 00:07:55.747 "block_size": 512, 00:07:55.747 "num_blocks": 65536, 00:07:55.747 "uuid": "ebb454fa-0e6e-4342-ae59-f3f7a82c46b5", 00:07:55.747 "assigned_rate_limits": { 00:07:55.747 "rw_ios_per_sec": 0, 00:07:55.747 "rw_mbytes_per_sec": 0, 00:07:55.747 "r_mbytes_per_sec": 0, 00:07:55.747 "w_mbytes_per_sec": 0 00:07:55.747 }, 00:07:55.747 "claimed": true, 00:07:55.747 "claim_type": "exclusive_write", 00:07:55.747 "zoned": false, 00:07:55.747 "supported_io_types": { 00:07:55.747 "read": true, 00:07:55.747 "write": true, 00:07:55.747 "unmap": true, 00:07:55.747 "flush": true, 00:07:55.747 "reset": true, 00:07:55.747 "nvme_admin": false, 00:07:55.747 "nvme_io": false, 00:07:55.747 "nvme_io_md": false, 00:07:55.747 "write_zeroes": true, 00:07:55.747 "zcopy": true, 00:07:55.747 "get_zone_info": false, 00:07:55.747 "zone_management": false, 00:07:55.747 "zone_append": false, 00:07:55.747 "compare": false, 00:07:55.747 "compare_and_write": false, 00:07:55.747 "abort": true, 00:07:55.747 "seek_hole": false, 00:07:55.747 "seek_data": false, 00:07:55.747 "copy": true, 00:07:55.747 "nvme_iov_md": false 00:07:55.747 }, 00:07:55.747 "memory_domains": [ 00:07:55.747 { 00:07:55.747 "dma_device_id": "system", 00:07:55.747 "dma_device_type": 1 00:07:55.747 }, 00:07:55.747 { 00:07:55.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.747 "dma_device_type": 2 00:07:55.747 } 00:07:55.747 ], 00:07:55.747 "driver_specific": {} 00:07:55.747 } 00:07:55.747 ] 
00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.747 "name": "Existed_Raid", 00:07:55.747 "uuid": "e607f19e-ebe0-4a98-b340-9f680b3da9fc", 00:07:55.747 "strip_size_kb": 64, 00:07:55.747 "state": "online", 00:07:55.747 "raid_level": "raid0", 00:07:55.747 "superblock": false, 00:07:55.747 "num_base_bdevs": 3, 00:07:55.747 "num_base_bdevs_discovered": 3, 00:07:55.747 "num_base_bdevs_operational": 3, 00:07:55.747 "base_bdevs_list": [ 00:07:55.747 { 00:07:55.747 "name": "BaseBdev1", 00:07:55.747 "uuid": "34c90b90-17ad-48c2-bebf-37d76db89cfa", 00:07:55.747 "is_configured": true, 00:07:55.747 "data_offset": 0, 00:07:55.747 "data_size": 65536 00:07:55.747 }, 00:07:55.747 { 00:07:55.747 "name": "BaseBdev2", 00:07:55.747 "uuid": "fcfcf7be-4c10-4110-a8ae-69c725a8050e", 00:07:55.747 "is_configured": true, 00:07:55.747 "data_offset": 0, 00:07:55.747 "data_size": 65536 00:07:55.747 }, 00:07:55.747 { 00:07:55.747 "name": "BaseBdev3", 00:07:55.747 "uuid": "ebb454fa-0e6e-4342-ae59-f3f7a82c46b5", 00:07:55.747 "is_configured": true, 00:07:55.747 "data_offset": 0, 00:07:55.747 "data_size": 65536 00:07:55.747 } 00:07:55.747 ] 00:07:55.747 }' 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.747 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.316 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:56.316 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:56.316 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.316 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:07:56.316 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.316 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.316 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:56.316 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.316 17:00:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.316 17:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.316 [2024-11-20 17:00:19.987449] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.316 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.316 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:56.316 "name": "Existed_Raid", 00:07:56.316 "aliases": [ 00:07:56.316 "e607f19e-ebe0-4a98-b340-9f680b3da9fc" 00:07:56.316 ], 00:07:56.316 "product_name": "Raid Volume", 00:07:56.316 "block_size": 512, 00:07:56.316 "num_blocks": 196608, 00:07:56.316 "uuid": "e607f19e-ebe0-4a98-b340-9f680b3da9fc", 00:07:56.316 "assigned_rate_limits": { 00:07:56.316 "rw_ios_per_sec": 0, 00:07:56.316 "rw_mbytes_per_sec": 0, 00:07:56.316 "r_mbytes_per_sec": 0, 00:07:56.316 "w_mbytes_per_sec": 0 00:07:56.316 }, 00:07:56.316 "claimed": false, 00:07:56.316 "zoned": false, 00:07:56.316 "supported_io_types": { 00:07:56.316 "read": true, 00:07:56.316 "write": true, 00:07:56.316 "unmap": true, 00:07:56.316 "flush": true, 00:07:56.316 "reset": true, 00:07:56.316 "nvme_admin": false, 00:07:56.316 "nvme_io": false, 00:07:56.316 "nvme_io_md": false, 00:07:56.316 "write_zeroes": true, 00:07:56.316 "zcopy": false, 00:07:56.316 "get_zone_info": false, 00:07:56.316 "zone_management": false, 00:07:56.316 
"zone_append": false, 00:07:56.316 "compare": false, 00:07:56.316 "compare_and_write": false, 00:07:56.316 "abort": false, 00:07:56.316 "seek_hole": false, 00:07:56.316 "seek_data": false, 00:07:56.316 "copy": false, 00:07:56.316 "nvme_iov_md": false 00:07:56.316 }, 00:07:56.316 "memory_domains": [ 00:07:56.316 { 00:07:56.316 "dma_device_id": "system", 00:07:56.316 "dma_device_type": 1 00:07:56.316 }, 00:07:56.316 { 00:07:56.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.316 "dma_device_type": 2 00:07:56.316 }, 00:07:56.316 { 00:07:56.316 "dma_device_id": "system", 00:07:56.316 "dma_device_type": 1 00:07:56.316 }, 00:07:56.316 { 00:07:56.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.316 "dma_device_type": 2 00:07:56.316 }, 00:07:56.316 { 00:07:56.316 "dma_device_id": "system", 00:07:56.316 "dma_device_type": 1 00:07:56.316 }, 00:07:56.316 { 00:07:56.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.316 "dma_device_type": 2 00:07:56.316 } 00:07:56.316 ], 00:07:56.316 "driver_specific": { 00:07:56.316 "raid": { 00:07:56.316 "uuid": "e607f19e-ebe0-4a98-b340-9f680b3da9fc", 00:07:56.316 "strip_size_kb": 64, 00:07:56.316 "state": "online", 00:07:56.316 "raid_level": "raid0", 00:07:56.316 "superblock": false, 00:07:56.316 "num_base_bdevs": 3, 00:07:56.316 "num_base_bdevs_discovered": 3, 00:07:56.316 "num_base_bdevs_operational": 3, 00:07:56.316 "base_bdevs_list": [ 00:07:56.316 { 00:07:56.316 "name": "BaseBdev1", 00:07:56.316 "uuid": "34c90b90-17ad-48c2-bebf-37d76db89cfa", 00:07:56.316 "is_configured": true, 00:07:56.316 "data_offset": 0, 00:07:56.316 "data_size": 65536 00:07:56.316 }, 00:07:56.316 { 00:07:56.316 "name": "BaseBdev2", 00:07:56.316 "uuid": "fcfcf7be-4c10-4110-a8ae-69c725a8050e", 00:07:56.316 "is_configured": true, 00:07:56.316 "data_offset": 0, 00:07:56.316 "data_size": 65536 00:07:56.316 }, 00:07:56.316 { 00:07:56.316 "name": "BaseBdev3", 00:07:56.316 "uuid": "ebb454fa-0e6e-4342-ae59-f3f7a82c46b5", 00:07:56.316 "is_configured": true, 
00:07:56.316 "data_offset": 0, 00:07:56.316 "data_size": 65536 00:07:56.316 } 00:07:56.316 ] 00:07:56.316 } 00:07:56.316 } 00:07:56.316 }' 00:07:56.316 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:56.316 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:56.316 BaseBdev2 00:07:56.316 BaseBdev3' 00:07:56.316 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.316 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:56.316 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.316 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:56.316 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.316 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.316 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.316 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.576 [2024-11-20 17:00:20.303201] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:56.576 [2024-11-20 17:00:20.303234] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.576 [2024-11-20 17:00:20.303354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.576 "name": "Existed_Raid", 00:07:56.576 "uuid": "e607f19e-ebe0-4a98-b340-9f680b3da9fc", 00:07:56.576 "strip_size_kb": 64, 00:07:56.576 "state": "offline", 00:07:56.576 "raid_level": "raid0", 00:07:56.576 "superblock": false, 00:07:56.576 "num_base_bdevs": 3, 00:07:56.576 "num_base_bdevs_discovered": 2, 00:07:56.576 "num_base_bdevs_operational": 2, 00:07:56.576 "base_bdevs_list": [ 00:07:56.576 { 00:07:56.576 "name": null, 00:07:56.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.576 "is_configured": false, 00:07:56.576 "data_offset": 0, 00:07:56.576 "data_size": 65536 00:07:56.576 }, 00:07:56.576 { 00:07:56.576 "name": "BaseBdev2", 00:07:56.576 "uuid": "fcfcf7be-4c10-4110-a8ae-69c725a8050e", 00:07:56.576 "is_configured": true, 00:07:56.576 "data_offset": 0, 00:07:56.576 "data_size": 65536 00:07:56.576 }, 00:07:56.576 { 00:07:56.576 "name": "BaseBdev3", 00:07:56.576 "uuid": "ebb454fa-0e6e-4342-ae59-f3f7a82c46b5", 00:07:56.576 "is_configured": true, 00:07:56.576 "data_offset": 0, 00:07:56.576 "data_size": 65536 00:07:56.576 } 00:07:56.576 ] 00:07:56.576 }' 00:07:56.576 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.577 17:00:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.145 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:57.145 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:57.145 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.145 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.145 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:57.145 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.145 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.145 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:57.145 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:57.145 17:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:57.145 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.145 17:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.145 [2024-11-20 17:00:20.973616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:57.404 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.404 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:57.404 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:57.404 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.404 17:00:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:57.404 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.404 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.404 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.404 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:57.404 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:57.404 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.405 [2024-11-20 17:00:21.107143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:57.405 [2024-11-20 17:00:21.107229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.405 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.664 BaseBdev2 00:07:57.664 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.664 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:57.664 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:57.664 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:57.664 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:57.664 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:57.664 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:57.664 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:57.664 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:57.664 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.664 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.664 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:57.664 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.664 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.664 [ 00:07:57.664 { 00:07:57.664 "name": "BaseBdev2", 00:07:57.664 "aliases": [ 00:07:57.664 "2cac4024-33b6-409e-98d7-6efed6be0852" 00:07:57.664 ], 00:07:57.664 "product_name": "Malloc disk", 00:07:57.664 "block_size": 512, 00:07:57.664 "num_blocks": 65536, 00:07:57.664 "uuid": "2cac4024-33b6-409e-98d7-6efed6be0852", 00:07:57.664 "assigned_rate_limits": { 00:07:57.664 "rw_ios_per_sec": 0, 00:07:57.664 "rw_mbytes_per_sec": 0, 00:07:57.664 "r_mbytes_per_sec": 0, 00:07:57.664 "w_mbytes_per_sec": 0 00:07:57.664 }, 00:07:57.664 "claimed": false, 00:07:57.664 "zoned": false, 00:07:57.664 "supported_io_types": { 00:07:57.664 "read": true, 00:07:57.664 "write": true, 00:07:57.664 "unmap": true, 00:07:57.664 "flush": true, 00:07:57.664 "reset": true, 00:07:57.664 "nvme_admin": false, 00:07:57.664 "nvme_io": false, 00:07:57.664 "nvme_io_md": false, 00:07:57.664 "write_zeroes": true, 00:07:57.664 "zcopy": true, 00:07:57.664 "get_zone_info": false, 00:07:57.664 "zone_management": false, 00:07:57.664 "zone_append": false, 00:07:57.664 "compare": false, 00:07:57.664 "compare_and_write": false, 00:07:57.664 "abort": true, 00:07:57.664 "seek_hole": false, 00:07:57.664 "seek_data": false, 00:07:57.664 "copy": true, 00:07:57.664 "nvme_iov_md": false 00:07:57.664 }, 00:07:57.664 "memory_domains": [ 00:07:57.664 { 00:07:57.665 "dma_device_id": "system", 00:07:57.665 "dma_device_type": 1 00:07:57.665 }, 
00:07:57.665 { 00:07:57.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.665 "dma_device_type": 2 00:07:57.665 } 00:07:57.665 ], 00:07:57.665 "driver_specific": {} 00:07:57.665 } 00:07:57.665 ] 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.665 BaseBdev3 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.665 [ 00:07:57.665 { 00:07:57.665 "name": "BaseBdev3", 00:07:57.665 "aliases": [ 00:07:57.665 "508cc573-f91e-473b-a4c1-df16dcf414d2" 00:07:57.665 ], 00:07:57.665 "product_name": "Malloc disk", 00:07:57.665 "block_size": 512, 00:07:57.665 "num_blocks": 65536, 00:07:57.665 "uuid": "508cc573-f91e-473b-a4c1-df16dcf414d2", 00:07:57.665 "assigned_rate_limits": { 00:07:57.665 "rw_ios_per_sec": 0, 00:07:57.665 "rw_mbytes_per_sec": 0, 00:07:57.665 "r_mbytes_per_sec": 0, 00:07:57.665 "w_mbytes_per_sec": 0 00:07:57.665 }, 00:07:57.665 "claimed": false, 00:07:57.665 "zoned": false, 00:07:57.665 "supported_io_types": { 00:07:57.665 "read": true, 00:07:57.665 "write": true, 00:07:57.665 "unmap": true, 00:07:57.665 "flush": true, 00:07:57.665 "reset": true, 00:07:57.665 "nvme_admin": false, 00:07:57.665 "nvme_io": false, 00:07:57.665 "nvme_io_md": false, 00:07:57.665 "write_zeroes": true, 00:07:57.665 "zcopy": true, 00:07:57.665 "get_zone_info": false, 00:07:57.665 "zone_management": false, 00:07:57.665 "zone_append": false, 00:07:57.665 "compare": false, 00:07:57.665 "compare_and_write": false, 00:07:57.665 "abort": true, 00:07:57.665 "seek_hole": false, 00:07:57.665 "seek_data": false, 00:07:57.665 "copy": true, 00:07:57.665 "nvme_iov_md": false 00:07:57.665 }, 00:07:57.665 "memory_domains": [ 00:07:57.665 { 00:07:57.665 "dma_device_id": "system", 00:07:57.665 "dma_device_type": 1 00:07:57.665 }, 00:07:57.665 { 
00:07:57.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.665 "dma_device_type": 2 00:07:57.665 } 00:07:57.665 ], 00:07:57.665 "driver_specific": {} 00:07:57.665 } 00:07:57.665 ] 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.665 [2024-11-20 17:00:21.392879] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.665 [2024-11-20 17:00:21.392948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.665 [2024-11-20 17:00:21.392999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:57.665 [2024-11-20 17:00:21.395429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.665 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.666 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.666 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.666 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.666 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.666 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.666 "name": "Existed_Raid", 00:07:57.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.666 "strip_size_kb": 64, 00:07:57.666 "state": "configuring", 00:07:57.666 "raid_level": "raid0", 00:07:57.666 "superblock": false, 00:07:57.666 "num_base_bdevs": 3, 00:07:57.666 "num_base_bdevs_discovered": 2, 00:07:57.666 "num_base_bdevs_operational": 3, 00:07:57.666 "base_bdevs_list": [ 00:07:57.666 { 00:07:57.666 "name": "BaseBdev1", 00:07:57.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.666 
"is_configured": false, 00:07:57.666 "data_offset": 0, 00:07:57.666 "data_size": 0 00:07:57.666 }, 00:07:57.666 { 00:07:57.666 "name": "BaseBdev2", 00:07:57.666 "uuid": "2cac4024-33b6-409e-98d7-6efed6be0852", 00:07:57.666 "is_configured": true, 00:07:57.666 "data_offset": 0, 00:07:57.666 "data_size": 65536 00:07:57.666 }, 00:07:57.666 { 00:07:57.666 "name": "BaseBdev3", 00:07:57.666 "uuid": "508cc573-f91e-473b-a4c1-df16dcf414d2", 00:07:57.666 "is_configured": true, 00:07:57.666 "data_offset": 0, 00:07:57.666 "data_size": 65536 00:07:57.666 } 00:07:57.666 ] 00:07:57.666 }' 00:07:57.666 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.666 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.233 [2024-11-20 17:00:21.897062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.233 17:00:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.233 "name": "Existed_Raid", 00:07:58.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.233 "strip_size_kb": 64, 00:07:58.233 "state": "configuring", 00:07:58.233 "raid_level": "raid0", 00:07:58.233 "superblock": false, 00:07:58.233 "num_base_bdevs": 3, 00:07:58.233 "num_base_bdevs_discovered": 1, 00:07:58.233 "num_base_bdevs_operational": 3, 00:07:58.233 "base_bdevs_list": [ 00:07:58.233 { 00:07:58.233 "name": "BaseBdev1", 00:07:58.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.233 "is_configured": false, 00:07:58.233 "data_offset": 0, 00:07:58.233 "data_size": 0 00:07:58.233 }, 00:07:58.233 { 00:07:58.233 "name": null, 00:07:58.233 "uuid": "2cac4024-33b6-409e-98d7-6efed6be0852", 00:07:58.233 "is_configured": false, 00:07:58.233 "data_offset": 0, 
00:07:58.233 "data_size": 65536 00:07:58.233 }, 00:07:58.233 { 00:07:58.233 "name": "BaseBdev3", 00:07:58.233 "uuid": "508cc573-f91e-473b-a4c1-df16dcf414d2", 00:07:58.233 "is_configured": true, 00:07:58.233 "data_offset": 0, 00:07:58.233 "data_size": 65536 00:07:58.233 } 00:07:58.233 ] 00:07:58.233 }' 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.233 17:00:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.802 [2024-11-20 17:00:22.526561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.802 BaseBdev1 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.802 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.802 [ 00:07:58.802 { 00:07:58.802 "name": "BaseBdev1", 00:07:58.802 "aliases": [ 00:07:58.802 "5a81a8bb-8175-4f86-b5ba-c04e06aaf5d9" 00:07:58.802 ], 00:07:58.802 "product_name": "Malloc disk", 00:07:58.802 "block_size": 512, 00:07:58.802 "num_blocks": 65536, 00:07:58.802 "uuid": "5a81a8bb-8175-4f86-b5ba-c04e06aaf5d9", 00:07:58.802 "assigned_rate_limits": { 00:07:58.802 "rw_ios_per_sec": 0, 00:07:58.802 "rw_mbytes_per_sec": 0, 00:07:58.802 "r_mbytes_per_sec": 0, 00:07:58.802 "w_mbytes_per_sec": 0 00:07:58.802 }, 00:07:58.802 "claimed": true, 00:07:58.802 "claim_type": "exclusive_write", 00:07:58.802 "zoned": false, 00:07:58.803 "supported_io_types": { 00:07:58.803 "read": true, 00:07:58.803 "write": true, 00:07:58.803 "unmap": 
true, 00:07:58.803 "flush": true, 00:07:58.803 "reset": true, 00:07:58.803 "nvme_admin": false, 00:07:58.803 "nvme_io": false, 00:07:58.803 "nvme_io_md": false, 00:07:58.803 "write_zeroes": true, 00:07:58.803 "zcopy": true, 00:07:58.803 "get_zone_info": false, 00:07:58.803 "zone_management": false, 00:07:58.803 "zone_append": false, 00:07:58.803 "compare": false, 00:07:58.803 "compare_and_write": false, 00:07:58.803 "abort": true, 00:07:58.803 "seek_hole": false, 00:07:58.803 "seek_data": false, 00:07:58.803 "copy": true, 00:07:58.803 "nvme_iov_md": false 00:07:58.803 }, 00:07:58.803 "memory_domains": [ 00:07:58.803 { 00:07:58.803 "dma_device_id": "system", 00:07:58.803 "dma_device_type": 1 00:07:58.803 }, 00:07:58.803 { 00:07:58.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.803 "dma_device_type": 2 00:07:58.803 } 00:07:58.803 ], 00:07:58.803 "driver_specific": {} 00:07:58.803 } 00:07:58.803 ] 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.803 17:00:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.803 "name": "Existed_Raid", 00:07:58.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.803 "strip_size_kb": 64, 00:07:58.803 "state": "configuring", 00:07:58.803 "raid_level": "raid0", 00:07:58.803 "superblock": false, 00:07:58.803 "num_base_bdevs": 3, 00:07:58.803 "num_base_bdevs_discovered": 2, 00:07:58.803 "num_base_bdevs_operational": 3, 00:07:58.803 "base_bdevs_list": [ 00:07:58.803 { 00:07:58.803 "name": "BaseBdev1", 00:07:58.803 "uuid": "5a81a8bb-8175-4f86-b5ba-c04e06aaf5d9", 00:07:58.803 "is_configured": true, 00:07:58.803 "data_offset": 0, 00:07:58.803 "data_size": 65536 00:07:58.803 }, 00:07:58.803 { 00:07:58.803 "name": null, 00:07:58.803 "uuid": "2cac4024-33b6-409e-98d7-6efed6be0852", 00:07:58.803 "is_configured": false, 00:07:58.803 "data_offset": 0, 00:07:58.803 "data_size": 65536 00:07:58.803 }, 00:07:58.803 { 00:07:58.803 "name": "BaseBdev3", 00:07:58.803 "uuid": "508cc573-f91e-473b-a4c1-df16dcf414d2", 00:07:58.803 "is_configured": true, 00:07:58.803 "data_offset": 0, 
00:07:58.803 "data_size": 65536 00:07:58.803 } 00:07:58.803 ] 00:07:58.803 }' 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.803 17:00:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.371 [2024-11-20 17:00:23.102849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.371 "name": "Existed_Raid", 00:07:59.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.371 "strip_size_kb": 64, 00:07:59.371 "state": "configuring", 00:07:59.371 "raid_level": "raid0", 00:07:59.371 "superblock": false, 00:07:59.371 "num_base_bdevs": 3, 00:07:59.371 "num_base_bdevs_discovered": 1, 00:07:59.371 "num_base_bdevs_operational": 3, 00:07:59.371 "base_bdevs_list": [ 00:07:59.371 { 00:07:59.371 "name": "BaseBdev1", 00:07:59.371 "uuid": "5a81a8bb-8175-4f86-b5ba-c04e06aaf5d9", 00:07:59.371 "is_configured": true, 00:07:59.371 "data_offset": 0, 00:07:59.371 "data_size": 65536 00:07:59.371 }, 00:07:59.371 { 
00:07:59.371 "name": null, 00:07:59.371 "uuid": "2cac4024-33b6-409e-98d7-6efed6be0852", 00:07:59.371 "is_configured": false, 00:07:59.371 "data_offset": 0, 00:07:59.371 "data_size": 65536 00:07:59.371 }, 00:07:59.371 { 00:07:59.371 "name": null, 00:07:59.371 "uuid": "508cc573-f91e-473b-a4c1-df16dcf414d2", 00:07:59.371 "is_configured": false, 00:07:59.371 "data_offset": 0, 00:07:59.371 "data_size": 65536 00:07:59.371 } 00:07:59.371 ] 00:07:59.371 }' 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.371 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.938 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.938 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.938 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.938 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:59.938 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.938 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.939 [2024-11-20 17:00:23.659183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.939 "name": "Existed_Raid", 00:07:59.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.939 "strip_size_kb": 64, 00:07:59.939 "state": "configuring", 00:07:59.939 "raid_level": "raid0", 00:07:59.939 
"superblock": false, 00:07:59.939 "num_base_bdevs": 3, 00:07:59.939 "num_base_bdevs_discovered": 2, 00:07:59.939 "num_base_bdevs_operational": 3, 00:07:59.939 "base_bdevs_list": [ 00:07:59.939 { 00:07:59.939 "name": "BaseBdev1", 00:07:59.939 "uuid": "5a81a8bb-8175-4f86-b5ba-c04e06aaf5d9", 00:07:59.939 "is_configured": true, 00:07:59.939 "data_offset": 0, 00:07:59.939 "data_size": 65536 00:07:59.939 }, 00:07:59.939 { 00:07:59.939 "name": null, 00:07:59.939 "uuid": "2cac4024-33b6-409e-98d7-6efed6be0852", 00:07:59.939 "is_configured": false, 00:07:59.939 "data_offset": 0, 00:07:59.939 "data_size": 65536 00:07:59.939 }, 00:07:59.939 { 00:07:59.939 "name": "BaseBdev3", 00:07:59.939 "uuid": "508cc573-f91e-473b-a4c1-df16dcf414d2", 00:07:59.939 "is_configured": true, 00:07:59.939 "data_offset": 0, 00:07:59.939 "data_size": 65536 00:07:59.939 } 00:07:59.939 ] 00:07:59.939 }' 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.939 17:00:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.506 [2024-11-20 17:00:24.227394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.506 17:00:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.765 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.765 "name": "Existed_Raid", 00:08:00.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.765 "strip_size_kb": 64, 00:08:00.765 "state": "configuring", 00:08:00.765 "raid_level": "raid0", 00:08:00.765 "superblock": false, 00:08:00.765 "num_base_bdevs": 3, 00:08:00.765 "num_base_bdevs_discovered": 1, 00:08:00.765 "num_base_bdevs_operational": 3, 00:08:00.765 "base_bdevs_list": [ 00:08:00.765 { 00:08:00.765 "name": null, 00:08:00.765 "uuid": "5a81a8bb-8175-4f86-b5ba-c04e06aaf5d9", 00:08:00.765 "is_configured": false, 00:08:00.765 "data_offset": 0, 00:08:00.765 "data_size": 65536 00:08:00.765 }, 00:08:00.765 { 00:08:00.765 "name": null, 00:08:00.765 "uuid": "2cac4024-33b6-409e-98d7-6efed6be0852", 00:08:00.765 "is_configured": false, 00:08:00.765 "data_offset": 0, 00:08:00.765 "data_size": 65536 00:08:00.765 }, 00:08:00.765 { 00:08:00.765 "name": "BaseBdev3", 00:08:00.765 "uuid": "508cc573-f91e-473b-a4c1-df16dcf414d2", 00:08:00.765 "is_configured": true, 00:08:00.765 "data_offset": 0, 00:08:00.765 "data_size": 65536 00:08:00.765 } 00:08:00.765 ] 00:08:00.765 }' 00:08:00.765 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.765 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.024 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.024 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.024 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.024 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:01.024 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:01.024 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:01.024 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:01.024 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.024 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.024 [2024-11-20 17:00:24.885853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:01.283 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.284 "name": "Existed_Raid", 00:08:01.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.284 "strip_size_kb": 64, 00:08:01.284 "state": "configuring", 00:08:01.284 "raid_level": "raid0", 00:08:01.284 "superblock": false, 00:08:01.284 "num_base_bdevs": 3, 00:08:01.284 "num_base_bdevs_discovered": 2, 00:08:01.284 "num_base_bdevs_operational": 3, 00:08:01.284 "base_bdevs_list": [ 00:08:01.284 { 00:08:01.284 "name": null, 00:08:01.284 "uuid": "5a81a8bb-8175-4f86-b5ba-c04e06aaf5d9", 00:08:01.284 "is_configured": false, 00:08:01.284 "data_offset": 0, 00:08:01.284 "data_size": 65536 00:08:01.284 }, 00:08:01.284 { 00:08:01.284 "name": "BaseBdev2", 00:08:01.284 "uuid": "2cac4024-33b6-409e-98d7-6efed6be0852", 00:08:01.284 "is_configured": true, 00:08:01.284 "data_offset": 0, 00:08:01.284 "data_size": 65536 00:08:01.284 }, 00:08:01.284 { 00:08:01.284 "name": "BaseBdev3", 00:08:01.284 "uuid": "508cc573-f91e-473b-a4c1-df16dcf414d2", 00:08:01.284 "is_configured": true, 00:08:01.284 "data_offset": 0, 00:08:01.284 "data_size": 65536 00:08:01.284 } 00:08:01.284 ] 00:08:01.284 }' 00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.284 17:00:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.543 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.543 17:00:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.543 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.543 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5a81a8bb-8175-4f86-b5ba-c04e06aaf5d9 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.802 [2024-11-20 17:00:25.544114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:01.802 [2024-11-20 17:00:25.544230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:01.802 [2024-11-20 17:00:25.544245] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:01.802 [2024-11-20 17:00:25.544635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:01.802 [2024-11-20 17:00:25.544855] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:01.802 [2024-11-20 17:00:25.544880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:01.802 [2024-11-20 17:00:25.545162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.802 NewBaseBdev 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.802 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:01.802 [ 00:08:01.802 { 00:08:01.802 "name": "NewBaseBdev", 00:08:01.802 "aliases": [ 00:08:01.802 "5a81a8bb-8175-4f86-b5ba-c04e06aaf5d9" 00:08:01.802 ], 00:08:01.802 "product_name": "Malloc disk", 00:08:01.802 "block_size": 512, 00:08:01.802 "num_blocks": 65536, 00:08:01.802 "uuid": "5a81a8bb-8175-4f86-b5ba-c04e06aaf5d9", 00:08:01.802 "assigned_rate_limits": { 00:08:01.802 "rw_ios_per_sec": 0, 00:08:01.802 "rw_mbytes_per_sec": 0, 00:08:01.802 "r_mbytes_per_sec": 0, 00:08:01.802 "w_mbytes_per_sec": 0 00:08:01.802 }, 00:08:01.802 "claimed": true, 00:08:01.802 "claim_type": "exclusive_write", 00:08:01.802 "zoned": false, 00:08:01.802 "supported_io_types": { 00:08:01.802 "read": true, 00:08:01.802 "write": true, 00:08:01.802 "unmap": true, 00:08:01.802 "flush": true, 00:08:01.802 "reset": true, 00:08:01.802 "nvme_admin": false, 00:08:01.802 "nvme_io": false, 00:08:01.802 "nvme_io_md": false, 00:08:01.802 "write_zeroes": true, 00:08:01.802 "zcopy": true, 00:08:01.802 "get_zone_info": false, 00:08:01.802 "zone_management": false, 00:08:01.802 "zone_append": false, 00:08:01.802 "compare": false, 00:08:01.802 "compare_and_write": false, 00:08:01.802 "abort": true, 00:08:01.802 "seek_hole": false, 00:08:01.802 "seek_data": false, 00:08:01.802 "copy": true, 00:08:01.802 "nvme_iov_md": false 00:08:01.802 }, 00:08:01.802 "memory_domains": [ 00:08:01.802 { 00:08:01.802 "dma_device_id": "system", 00:08:01.802 "dma_device_type": 1 00:08:01.802 }, 00:08:01.802 { 00:08:01.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.802 "dma_device_type": 2 00:08:01.802 } 00:08:01.802 ], 00:08:01.802 "driver_specific": {} 00:08:01.802 } 00:08:01.802 ] 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.803 "name": "Existed_Raid", 00:08:01.803 "uuid": "1c15189a-e153-4726-93d4-96ea8b82b799", 00:08:01.803 "strip_size_kb": 64, 00:08:01.803 "state": "online", 00:08:01.803 "raid_level": "raid0", 00:08:01.803 "superblock": false, 00:08:01.803 "num_base_bdevs": 3, 00:08:01.803 
"num_base_bdevs_discovered": 3, 00:08:01.803 "num_base_bdevs_operational": 3, 00:08:01.803 "base_bdevs_list": [ 00:08:01.803 { 00:08:01.803 "name": "NewBaseBdev", 00:08:01.803 "uuid": "5a81a8bb-8175-4f86-b5ba-c04e06aaf5d9", 00:08:01.803 "is_configured": true, 00:08:01.803 "data_offset": 0, 00:08:01.803 "data_size": 65536 00:08:01.803 }, 00:08:01.803 { 00:08:01.803 "name": "BaseBdev2", 00:08:01.803 "uuid": "2cac4024-33b6-409e-98d7-6efed6be0852", 00:08:01.803 "is_configured": true, 00:08:01.803 "data_offset": 0, 00:08:01.803 "data_size": 65536 00:08:01.803 }, 00:08:01.803 { 00:08:01.803 "name": "BaseBdev3", 00:08:01.803 "uuid": "508cc573-f91e-473b-a4c1-df16dcf414d2", 00:08:01.803 "is_configured": true, 00:08:01.803 "data_offset": 0, 00:08:01.803 "data_size": 65536 00:08:01.803 } 00:08:01.803 ] 00:08:01.803 }' 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.803 17:00:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.371 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:02.371 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:02.371 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:02.371 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:02.371 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:02.371 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:02.371 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:02.371 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.371 17:00:26 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:02.371 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.371 [2024-11-20 17:00:26.092867] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.371 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.371 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:02.371 "name": "Existed_Raid", 00:08:02.371 "aliases": [ 00:08:02.371 "1c15189a-e153-4726-93d4-96ea8b82b799" 00:08:02.371 ], 00:08:02.371 "product_name": "Raid Volume", 00:08:02.371 "block_size": 512, 00:08:02.371 "num_blocks": 196608, 00:08:02.371 "uuid": "1c15189a-e153-4726-93d4-96ea8b82b799", 00:08:02.371 "assigned_rate_limits": { 00:08:02.371 "rw_ios_per_sec": 0, 00:08:02.371 "rw_mbytes_per_sec": 0, 00:08:02.371 "r_mbytes_per_sec": 0, 00:08:02.371 "w_mbytes_per_sec": 0 00:08:02.371 }, 00:08:02.371 "claimed": false, 00:08:02.371 "zoned": false, 00:08:02.371 "supported_io_types": { 00:08:02.371 "read": true, 00:08:02.371 "write": true, 00:08:02.371 "unmap": true, 00:08:02.371 "flush": true, 00:08:02.371 "reset": true, 00:08:02.371 "nvme_admin": false, 00:08:02.371 "nvme_io": false, 00:08:02.371 "nvme_io_md": false, 00:08:02.371 "write_zeroes": true, 00:08:02.371 "zcopy": false, 00:08:02.371 "get_zone_info": false, 00:08:02.371 "zone_management": false, 00:08:02.371 "zone_append": false, 00:08:02.371 "compare": false, 00:08:02.371 "compare_and_write": false, 00:08:02.371 "abort": false, 00:08:02.371 "seek_hole": false, 00:08:02.371 "seek_data": false, 00:08:02.371 "copy": false, 00:08:02.371 "nvme_iov_md": false 00:08:02.371 }, 00:08:02.371 "memory_domains": [ 00:08:02.371 { 00:08:02.371 "dma_device_id": "system", 00:08:02.371 "dma_device_type": 1 00:08:02.371 }, 00:08:02.371 { 00:08:02.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.371 "dma_device_type": 2 00:08:02.371 }, 00:08:02.371 
{ 00:08:02.371 "dma_device_id": "system", 00:08:02.371 "dma_device_type": 1 00:08:02.371 }, 00:08:02.371 { 00:08:02.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.371 "dma_device_type": 2 00:08:02.371 }, 00:08:02.371 { 00:08:02.371 "dma_device_id": "system", 00:08:02.371 "dma_device_type": 1 00:08:02.371 }, 00:08:02.371 { 00:08:02.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.371 "dma_device_type": 2 00:08:02.371 } 00:08:02.371 ], 00:08:02.371 "driver_specific": { 00:08:02.371 "raid": { 00:08:02.371 "uuid": "1c15189a-e153-4726-93d4-96ea8b82b799", 00:08:02.371 "strip_size_kb": 64, 00:08:02.371 "state": "online", 00:08:02.371 "raid_level": "raid0", 00:08:02.371 "superblock": false, 00:08:02.371 "num_base_bdevs": 3, 00:08:02.371 "num_base_bdevs_discovered": 3, 00:08:02.371 "num_base_bdevs_operational": 3, 00:08:02.371 "base_bdevs_list": [ 00:08:02.371 { 00:08:02.371 "name": "NewBaseBdev", 00:08:02.371 "uuid": "5a81a8bb-8175-4f86-b5ba-c04e06aaf5d9", 00:08:02.371 "is_configured": true, 00:08:02.371 "data_offset": 0, 00:08:02.371 "data_size": 65536 00:08:02.371 }, 00:08:02.371 { 00:08:02.371 "name": "BaseBdev2", 00:08:02.371 "uuid": "2cac4024-33b6-409e-98d7-6efed6be0852", 00:08:02.371 "is_configured": true, 00:08:02.371 "data_offset": 0, 00:08:02.371 "data_size": 65536 00:08:02.371 }, 00:08:02.371 { 00:08:02.371 "name": "BaseBdev3", 00:08:02.371 "uuid": "508cc573-f91e-473b-a4c1-df16dcf414d2", 00:08:02.371 "is_configured": true, 00:08:02.371 "data_offset": 0, 00:08:02.371 "data_size": 65536 00:08:02.371 } 00:08:02.371 ] 00:08:02.371 } 00:08:02.371 } 00:08:02.371 }' 00:08:02.371 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:02.371 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:02.371 BaseBdev2 00:08:02.371 BaseBdev3' 00:08:02.371 17:00:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.631 
17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.631 [2024-11-20 17:00:26.404526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:02.631 [2024-11-20 17:00:26.404558] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.631 [2024-11-20 17:00:26.404657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.631 [2024-11-20 17:00:26.404722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.631 [2024-11-20 17:00:26.404756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63608 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63608 ']' 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63608 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63608 00:08:02.631 killing process with pid 63608 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63608' 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63608 00:08:02.631 [2024-11-20 17:00:26.442679] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.631 17:00:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63608 00:08:02.891 [2024-11-20 17:00:26.730217] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:04.270 00:08:04.270 real 0m11.835s 00:08:04.270 user 0m19.601s 00:08:04.270 sys 0m1.618s 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.270 
17:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.270 ************************************ 00:08:04.270 END TEST raid_state_function_test 00:08:04.270 ************************************ 00:08:04.270 17:00:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:04.270 17:00:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:04.270 17:00:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.270 17:00:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.270 ************************************ 00:08:04.270 START TEST raid_state_function_test_sb 00:08:04.270 ************************************ 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64246 00:08:04.270 Process raid pid: 64246 00:08:04.270 17:00:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64246' 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64246 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64246 ']' 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.270 17:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.270 [2024-11-20 17:00:28.014783] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:08:04.270 [2024-11-20 17:00:28.014983] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.529 [2024-11-20 17:00:28.200080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.529 [2024-11-20 17:00:28.339413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.788 [2024-11-20 17:00:28.565017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.788 [2024-11-20 17:00:28.565060] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.356 17:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.356 17:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:05.356 17:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:05.356 17:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.356 17:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.356 [2024-11-20 17:00:28.989434] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.357 [2024-11-20 17:00:28.989556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.357 [2024-11-20 17:00:28.989572] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.357 [2024-11-20 17:00:28.989588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.357 [2024-11-20 17:00:28.989598] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:05.357 [2024-11-20 17:00:28.989612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:05.357 17:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.357 17:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.357 17:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.357 17:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.357 17:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.357 17:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.357 17:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.357 17:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.357 17:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.357 17:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.357 17:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.357 17:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.357 17:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.357 17:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.357 17:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.357 17:00:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.357 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.357 "name": "Existed_Raid", 00:08:05.357 "uuid": "4cc52a8c-24ca-4968-a25d-c4918e4195eb", 00:08:05.357 "strip_size_kb": 64, 00:08:05.357 "state": "configuring", 00:08:05.357 "raid_level": "raid0", 00:08:05.357 "superblock": true, 00:08:05.357 "num_base_bdevs": 3, 00:08:05.357 "num_base_bdevs_discovered": 0, 00:08:05.357 "num_base_bdevs_operational": 3, 00:08:05.357 "base_bdevs_list": [ 00:08:05.357 { 00:08:05.357 "name": "BaseBdev1", 00:08:05.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.357 "is_configured": false, 00:08:05.357 "data_offset": 0, 00:08:05.357 "data_size": 0 00:08:05.357 }, 00:08:05.357 { 00:08:05.357 "name": "BaseBdev2", 00:08:05.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.357 "is_configured": false, 00:08:05.357 "data_offset": 0, 00:08:05.357 "data_size": 0 00:08:05.357 }, 00:08:05.357 { 00:08:05.357 "name": "BaseBdev3", 00:08:05.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.357 "is_configured": false, 00:08:05.357 "data_offset": 0, 00:08:05.357 "data_size": 0 00:08:05.357 } 00:08:05.357 ] 00:08:05.357 }' 00:08:05.357 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.357 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.924 [2024-11-20 17:00:29.509516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:05.924 [2024-11-20 17:00:29.509591] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.924 [2024-11-20 17:00:29.517485] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.924 [2024-11-20 17:00:29.517544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.924 [2024-11-20 17:00:29.517559] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.924 [2024-11-20 17:00:29.517575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.924 [2024-11-20 17:00:29.517584] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:05.924 [2024-11-20 17:00:29.517598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.924 [2024-11-20 17:00:29.565348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.924 BaseBdev1 
00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.924 [ 00:08:05.924 { 00:08:05.924 "name": "BaseBdev1", 00:08:05.924 "aliases": [ 00:08:05.924 "76b3307e-8946-4581-8910-c5ef684531a1" 00:08:05.924 ], 00:08:05.924 "product_name": "Malloc disk", 00:08:05.924 "block_size": 512, 00:08:05.924 "num_blocks": 65536, 00:08:05.924 "uuid": "76b3307e-8946-4581-8910-c5ef684531a1", 00:08:05.924 "assigned_rate_limits": { 00:08:05.924 
"rw_ios_per_sec": 0, 00:08:05.924 "rw_mbytes_per_sec": 0, 00:08:05.924 "r_mbytes_per_sec": 0, 00:08:05.924 "w_mbytes_per_sec": 0 00:08:05.924 }, 00:08:05.924 "claimed": true, 00:08:05.924 "claim_type": "exclusive_write", 00:08:05.924 "zoned": false, 00:08:05.924 "supported_io_types": { 00:08:05.924 "read": true, 00:08:05.924 "write": true, 00:08:05.924 "unmap": true, 00:08:05.924 "flush": true, 00:08:05.924 "reset": true, 00:08:05.924 "nvme_admin": false, 00:08:05.924 "nvme_io": false, 00:08:05.924 "nvme_io_md": false, 00:08:05.924 "write_zeroes": true, 00:08:05.924 "zcopy": true, 00:08:05.924 "get_zone_info": false, 00:08:05.924 "zone_management": false, 00:08:05.924 "zone_append": false, 00:08:05.924 "compare": false, 00:08:05.924 "compare_and_write": false, 00:08:05.924 "abort": true, 00:08:05.924 "seek_hole": false, 00:08:05.924 "seek_data": false, 00:08:05.924 "copy": true, 00:08:05.924 "nvme_iov_md": false 00:08:05.924 }, 00:08:05.924 "memory_domains": [ 00:08:05.924 { 00:08:05.924 "dma_device_id": "system", 00:08:05.924 "dma_device_type": 1 00:08:05.924 }, 00:08:05.924 { 00:08:05.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.924 "dma_device_type": 2 00:08:05.924 } 00:08:05.924 ], 00:08:05.924 "driver_specific": {} 00:08:05.924 } 00:08:05.924 ] 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.924 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.925 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.925 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:05.925 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.925 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.925 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.925 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.925 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.925 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.925 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.925 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.925 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.925 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.925 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.925 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.925 "name": "Existed_Raid", 00:08:05.925 "uuid": "20b80560-b071-45d7-a4c7-97d8962a5baa", 00:08:05.925 "strip_size_kb": 64, 00:08:05.925 "state": "configuring", 00:08:05.925 "raid_level": "raid0", 00:08:05.925 "superblock": true, 00:08:05.925 "num_base_bdevs": 3, 00:08:05.925 "num_base_bdevs_discovered": 1, 00:08:05.925 "num_base_bdevs_operational": 3, 00:08:05.925 "base_bdevs_list": [ 00:08:05.925 { 00:08:05.925 "name": "BaseBdev1", 00:08:05.925 "uuid": "76b3307e-8946-4581-8910-c5ef684531a1", 00:08:05.925 "is_configured": true, 00:08:05.925 "data_offset": 2048, 00:08:05.925 "data_size": 63488 
00:08:05.925 }, 00:08:05.925 { 00:08:05.925 "name": "BaseBdev2", 00:08:05.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.925 "is_configured": false, 00:08:05.925 "data_offset": 0, 00:08:05.925 "data_size": 0 00:08:05.925 }, 00:08:05.925 { 00:08:05.925 "name": "BaseBdev3", 00:08:05.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.925 "is_configured": false, 00:08:05.925 "data_offset": 0, 00:08:05.925 "data_size": 0 00:08:05.925 } 00:08:05.925 ] 00:08:05.925 }' 00:08:05.925 17:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.925 17:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.492 [2024-11-20 17:00:30.129797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.492 [2024-11-20 17:00:30.129867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.492 [2024-11-20 17:00:30.137864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.492 [2024-11-20 
17:00:30.140353] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.492 [2024-11-20 17:00:30.140416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.492 [2024-11-20 17:00:30.140449] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:06.492 [2024-11-20 17:00:30.140463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.492 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.492 "name": "Existed_Raid", 00:08:06.492 "uuid": "87bb3357-2f8d-422d-8911-c4fea3a76ed3", 00:08:06.492 "strip_size_kb": 64, 00:08:06.492 "state": "configuring", 00:08:06.492 "raid_level": "raid0", 00:08:06.492 "superblock": true, 00:08:06.492 "num_base_bdevs": 3, 00:08:06.492 "num_base_bdevs_discovered": 1, 00:08:06.492 "num_base_bdevs_operational": 3, 00:08:06.492 "base_bdevs_list": [ 00:08:06.492 { 00:08:06.492 "name": "BaseBdev1", 00:08:06.492 "uuid": "76b3307e-8946-4581-8910-c5ef684531a1", 00:08:06.492 "is_configured": true, 00:08:06.492 "data_offset": 2048, 00:08:06.492 "data_size": 63488 00:08:06.492 }, 00:08:06.492 { 00:08:06.492 "name": "BaseBdev2", 00:08:06.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.492 "is_configured": false, 00:08:06.492 "data_offset": 0, 00:08:06.492 "data_size": 0 00:08:06.492 }, 00:08:06.492 { 00:08:06.492 "name": "BaseBdev3", 00:08:06.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.492 "is_configured": false, 00:08:06.492 "data_offset": 0, 00:08:06.492 "data_size": 0 00:08:06.492 } 00:08:06.492 ] 00:08:06.492 }' 00:08:06.493 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.493 17:00:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.060 [2024-11-20 17:00:30.704694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.060 BaseBdev2 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.060 [ 00:08:07.060 { 00:08:07.060 "name": "BaseBdev2", 00:08:07.060 "aliases": [ 00:08:07.060 "d09321ab-b626-4ebf-b994-dc27384e9dfd" 00:08:07.060 ], 00:08:07.060 "product_name": "Malloc disk", 00:08:07.060 "block_size": 512, 00:08:07.060 "num_blocks": 65536, 00:08:07.060 "uuid": "d09321ab-b626-4ebf-b994-dc27384e9dfd", 00:08:07.060 "assigned_rate_limits": { 00:08:07.060 "rw_ios_per_sec": 0, 00:08:07.060 "rw_mbytes_per_sec": 0, 00:08:07.060 "r_mbytes_per_sec": 0, 00:08:07.060 "w_mbytes_per_sec": 0 00:08:07.060 }, 00:08:07.060 "claimed": true, 00:08:07.060 "claim_type": "exclusive_write", 00:08:07.060 "zoned": false, 00:08:07.060 "supported_io_types": { 00:08:07.060 "read": true, 00:08:07.060 "write": true, 00:08:07.060 "unmap": true, 00:08:07.060 "flush": true, 00:08:07.060 "reset": true, 00:08:07.060 "nvme_admin": false, 00:08:07.060 "nvme_io": false, 00:08:07.060 "nvme_io_md": false, 00:08:07.060 "write_zeroes": true, 00:08:07.060 "zcopy": true, 00:08:07.060 "get_zone_info": false, 00:08:07.060 "zone_management": false, 00:08:07.060 "zone_append": false, 00:08:07.060 "compare": false, 00:08:07.060 "compare_and_write": false, 00:08:07.060 "abort": true, 00:08:07.060 "seek_hole": false, 00:08:07.060 "seek_data": false, 00:08:07.060 "copy": true, 00:08:07.060 "nvme_iov_md": false 00:08:07.060 }, 00:08:07.060 "memory_domains": [ 00:08:07.060 { 00:08:07.060 "dma_device_id": "system", 00:08:07.060 "dma_device_type": 1 00:08:07.060 }, 00:08:07.060 { 00:08:07.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.060 "dma_device_type": 2 00:08:07.060 } 00:08:07.060 ], 00:08:07.060 "driver_specific": {} 00:08:07.060 } 00:08:07.060 ] 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.060 "name": "Existed_Raid", 00:08:07.060 "uuid": "87bb3357-2f8d-422d-8911-c4fea3a76ed3", 00:08:07.060 "strip_size_kb": 64, 00:08:07.060 "state": "configuring", 00:08:07.060 "raid_level": "raid0", 00:08:07.060 "superblock": true, 00:08:07.060 "num_base_bdevs": 3, 00:08:07.060 "num_base_bdevs_discovered": 2, 00:08:07.060 "num_base_bdevs_operational": 3, 00:08:07.060 "base_bdevs_list": [ 00:08:07.060 { 00:08:07.060 "name": "BaseBdev1", 00:08:07.060 "uuid": "76b3307e-8946-4581-8910-c5ef684531a1", 00:08:07.060 "is_configured": true, 00:08:07.060 "data_offset": 2048, 00:08:07.060 "data_size": 63488 00:08:07.060 }, 00:08:07.060 { 00:08:07.060 "name": "BaseBdev2", 00:08:07.060 "uuid": "d09321ab-b626-4ebf-b994-dc27384e9dfd", 00:08:07.060 "is_configured": true, 00:08:07.060 "data_offset": 2048, 00:08:07.060 "data_size": 63488 00:08:07.060 }, 00:08:07.060 { 00:08:07.060 "name": "BaseBdev3", 00:08:07.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.060 "is_configured": false, 00:08:07.060 "data_offset": 0, 00:08:07.060 "data_size": 0 00:08:07.060 } 00:08:07.060 ] 00:08:07.060 }' 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.060 17:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.627 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:07.627 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.627 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.627 [2024-11-20 17:00:31.303282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:07.627 [2024-11-20 17:00:31.303615] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:07.627 [2024-11-20 17:00:31.303644] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:07.627 BaseBdev3 00:08:07.627 [2024-11-20 17:00:31.303992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:07.627 [2024-11-20 17:00:31.304197] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:07.627 [2024-11-20 17:00:31.304214] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:07.627 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.627 [2024-11-20 17:00:31.304387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.627 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:07.627 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.628 [ 00:08:07.628 { 00:08:07.628 "name": "BaseBdev3", 00:08:07.628 "aliases": [ 00:08:07.628 "e889b553-79e4-41de-99dc-ff74719b4980" 00:08:07.628 ], 00:08:07.628 "product_name": "Malloc disk", 00:08:07.628 "block_size": 512, 00:08:07.628 "num_blocks": 65536, 00:08:07.628 "uuid": "e889b553-79e4-41de-99dc-ff74719b4980", 00:08:07.628 "assigned_rate_limits": { 00:08:07.628 "rw_ios_per_sec": 0, 00:08:07.628 "rw_mbytes_per_sec": 0, 00:08:07.628 "r_mbytes_per_sec": 0, 00:08:07.628 "w_mbytes_per_sec": 0 00:08:07.628 }, 00:08:07.628 "claimed": true, 00:08:07.628 "claim_type": "exclusive_write", 00:08:07.628 "zoned": false, 00:08:07.628 "supported_io_types": { 00:08:07.628 "read": true, 00:08:07.628 "write": true, 00:08:07.628 "unmap": true, 00:08:07.628 "flush": true, 00:08:07.628 "reset": true, 00:08:07.628 "nvme_admin": false, 00:08:07.628 "nvme_io": false, 00:08:07.628 "nvme_io_md": false, 00:08:07.628 "write_zeroes": true, 00:08:07.628 "zcopy": true, 00:08:07.628 "get_zone_info": false, 00:08:07.628 "zone_management": false, 00:08:07.628 "zone_append": false, 00:08:07.628 "compare": false, 00:08:07.628 "compare_and_write": false, 00:08:07.628 "abort": true, 00:08:07.628 "seek_hole": false, 00:08:07.628 "seek_data": false, 00:08:07.628 "copy": true, 00:08:07.628 "nvme_iov_md": false 00:08:07.628 }, 00:08:07.628 "memory_domains": [ 00:08:07.628 { 00:08:07.628 "dma_device_id": "system", 00:08:07.628 "dma_device_type": 1 00:08:07.628 }, 00:08:07.628 { 00:08:07.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.628 "dma_device_type": 2 00:08:07.628 } 00:08:07.628 ], 00:08:07.628 "driver_specific": 
{} 00:08:07.628 } 00:08:07.628 ] 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.628 "name": "Existed_Raid", 00:08:07.628 "uuid": "87bb3357-2f8d-422d-8911-c4fea3a76ed3", 00:08:07.628 "strip_size_kb": 64, 00:08:07.628 "state": "online", 00:08:07.628 "raid_level": "raid0", 00:08:07.628 "superblock": true, 00:08:07.628 "num_base_bdevs": 3, 00:08:07.628 "num_base_bdevs_discovered": 3, 00:08:07.628 "num_base_bdevs_operational": 3, 00:08:07.628 "base_bdevs_list": [ 00:08:07.628 { 00:08:07.628 "name": "BaseBdev1", 00:08:07.628 "uuid": "76b3307e-8946-4581-8910-c5ef684531a1", 00:08:07.628 "is_configured": true, 00:08:07.628 "data_offset": 2048, 00:08:07.628 "data_size": 63488 00:08:07.628 }, 00:08:07.628 { 00:08:07.628 "name": "BaseBdev2", 00:08:07.628 "uuid": "d09321ab-b626-4ebf-b994-dc27384e9dfd", 00:08:07.628 "is_configured": true, 00:08:07.628 "data_offset": 2048, 00:08:07.628 "data_size": 63488 00:08:07.628 }, 00:08:07.628 { 00:08:07.628 "name": "BaseBdev3", 00:08:07.628 "uuid": "e889b553-79e4-41de-99dc-ff74719b4980", 00:08:07.628 "is_configured": true, 00:08:07.628 "data_offset": 2048, 00:08:07.628 "data_size": 63488 00:08:07.628 } 00:08:07.628 ] 00:08:07.628 }' 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.628 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.195 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:08.195 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:08.195 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:08.195 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:08.195 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:08.195 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:08.195 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:08.195 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.195 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:08.195 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.195 [2024-11-20 17:00:31.863986] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:08.195 17:00:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.195 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:08.195 "name": "Existed_Raid", 00:08:08.195 "aliases": [ 00:08:08.195 "87bb3357-2f8d-422d-8911-c4fea3a76ed3" 00:08:08.195 ], 00:08:08.195 "product_name": "Raid Volume", 00:08:08.195 "block_size": 512, 00:08:08.195 "num_blocks": 190464, 00:08:08.195 "uuid": "87bb3357-2f8d-422d-8911-c4fea3a76ed3", 00:08:08.195 "assigned_rate_limits": { 00:08:08.195 "rw_ios_per_sec": 0, 00:08:08.195 "rw_mbytes_per_sec": 0, 00:08:08.195 "r_mbytes_per_sec": 0, 00:08:08.195 "w_mbytes_per_sec": 0 00:08:08.195 }, 00:08:08.195 "claimed": false, 00:08:08.195 "zoned": false, 00:08:08.195 "supported_io_types": { 00:08:08.195 "read": true, 00:08:08.195 "write": true, 00:08:08.195 "unmap": true, 00:08:08.195 "flush": true, 00:08:08.195 "reset": true, 00:08:08.195 "nvme_admin": false, 00:08:08.195 "nvme_io": false, 00:08:08.195 "nvme_io_md": false, 00:08:08.195 
"write_zeroes": true, 00:08:08.195 "zcopy": false, 00:08:08.195 "get_zone_info": false, 00:08:08.195 "zone_management": false, 00:08:08.195 "zone_append": false, 00:08:08.195 "compare": false, 00:08:08.195 "compare_and_write": false, 00:08:08.195 "abort": false, 00:08:08.195 "seek_hole": false, 00:08:08.195 "seek_data": false, 00:08:08.195 "copy": false, 00:08:08.195 "nvme_iov_md": false 00:08:08.195 }, 00:08:08.195 "memory_domains": [ 00:08:08.195 { 00:08:08.195 "dma_device_id": "system", 00:08:08.195 "dma_device_type": 1 00:08:08.195 }, 00:08:08.195 { 00:08:08.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.195 "dma_device_type": 2 00:08:08.195 }, 00:08:08.195 { 00:08:08.195 "dma_device_id": "system", 00:08:08.195 "dma_device_type": 1 00:08:08.195 }, 00:08:08.195 { 00:08:08.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.195 "dma_device_type": 2 00:08:08.195 }, 00:08:08.195 { 00:08:08.195 "dma_device_id": "system", 00:08:08.195 "dma_device_type": 1 00:08:08.195 }, 00:08:08.195 { 00:08:08.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.195 "dma_device_type": 2 00:08:08.195 } 00:08:08.195 ], 00:08:08.195 "driver_specific": { 00:08:08.195 "raid": { 00:08:08.195 "uuid": "87bb3357-2f8d-422d-8911-c4fea3a76ed3", 00:08:08.195 "strip_size_kb": 64, 00:08:08.195 "state": "online", 00:08:08.195 "raid_level": "raid0", 00:08:08.195 "superblock": true, 00:08:08.195 "num_base_bdevs": 3, 00:08:08.195 "num_base_bdevs_discovered": 3, 00:08:08.195 "num_base_bdevs_operational": 3, 00:08:08.195 "base_bdevs_list": [ 00:08:08.195 { 00:08:08.195 "name": "BaseBdev1", 00:08:08.195 "uuid": "76b3307e-8946-4581-8910-c5ef684531a1", 00:08:08.195 "is_configured": true, 00:08:08.195 "data_offset": 2048, 00:08:08.195 "data_size": 63488 00:08:08.195 }, 00:08:08.195 { 00:08:08.195 "name": "BaseBdev2", 00:08:08.195 "uuid": "d09321ab-b626-4ebf-b994-dc27384e9dfd", 00:08:08.195 "is_configured": true, 00:08:08.195 "data_offset": 2048, 00:08:08.195 "data_size": 63488 00:08:08.195 }, 
00:08:08.195 { 00:08:08.195 "name": "BaseBdev3", 00:08:08.195 "uuid": "e889b553-79e4-41de-99dc-ff74719b4980", 00:08:08.195 "is_configured": true, 00:08:08.195 "data_offset": 2048, 00:08:08.195 "data_size": 63488 00:08:08.195 } 00:08:08.195 ] 00:08:08.195 } 00:08:08.195 } 00:08:08.195 }' 00:08:08.195 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:08.195 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:08.195 BaseBdev2 00:08:08.195 BaseBdev3' 00:08:08.195 17:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.195 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:08.195 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.195 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:08.195 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.195 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.196 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.196 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.455 
17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.455 [2024-11-20 17:00:32.183677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:08.455 [2024-11-20 17:00:32.183727] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.455 [2024-11-20 17:00:32.183852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.455 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.714 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.714 "name": "Existed_Raid", 00:08:08.714 "uuid": "87bb3357-2f8d-422d-8911-c4fea3a76ed3", 00:08:08.714 "strip_size_kb": 64, 00:08:08.714 "state": "offline", 00:08:08.714 "raid_level": "raid0", 00:08:08.714 "superblock": true, 00:08:08.714 "num_base_bdevs": 3, 00:08:08.714 "num_base_bdevs_discovered": 2, 00:08:08.714 "num_base_bdevs_operational": 2, 00:08:08.714 "base_bdevs_list": [ 00:08:08.714 { 00:08:08.714 "name": null, 00:08:08.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.714 "is_configured": false, 00:08:08.714 "data_offset": 0, 00:08:08.714 "data_size": 63488 00:08:08.714 }, 00:08:08.714 { 00:08:08.714 "name": "BaseBdev2", 00:08:08.714 "uuid": "d09321ab-b626-4ebf-b994-dc27384e9dfd", 00:08:08.714 "is_configured": true, 00:08:08.714 "data_offset": 2048, 00:08:08.714 "data_size": 63488 00:08:08.714 }, 00:08:08.714 { 00:08:08.714 "name": "BaseBdev3", 00:08:08.714 "uuid": "e889b553-79e4-41de-99dc-ff74719b4980", 
00:08:08.714 "is_configured": true, 00:08:08.714 "data_offset": 2048, 00:08:08.714 "data_size": 63488 00:08:08.714 } 00:08:08.714 ] 00:08:08.714 }' 00:08:08.714 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.714 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.974 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:08.974 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.974 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.974 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:08.974 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.974 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.974 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.974 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:08.974 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:08.974 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:08.974 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.974 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.974 [2024-11-20 17:00:32.821894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:09.233 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.233 17:00:32 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:09.233 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:09.233 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.233 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.233 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:09.233 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.233 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.233 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:09.233 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:09.233 17:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:09.233 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.233 17:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.233 [2024-11-20 17:00:32.973343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:09.233 [2024-11-20 17:00:32.973421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:09.233 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.233 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:09.233 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:09.233 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:09.233 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.233 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.233 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:09.234 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.493 BaseBdev2 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:09.493 17:00:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.493 [ 00:08:09.493 { 00:08:09.493 "name": "BaseBdev2", 00:08:09.493 "aliases": [ 00:08:09.493 "96f3a00c-2f56-4e8f-89f2-d3e7a3894e36" 00:08:09.493 ], 00:08:09.493 "product_name": "Malloc disk", 00:08:09.493 "block_size": 512, 00:08:09.493 "num_blocks": 65536, 00:08:09.493 "uuid": "96f3a00c-2f56-4e8f-89f2-d3e7a3894e36", 00:08:09.493 "assigned_rate_limits": { 00:08:09.493 "rw_ios_per_sec": 0, 00:08:09.493 "rw_mbytes_per_sec": 0, 00:08:09.493 "r_mbytes_per_sec": 0, 00:08:09.493 "w_mbytes_per_sec": 0 00:08:09.493 }, 00:08:09.493 "claimed": false, 00:08:09.493 "zoned": false, 00:08:09.493 "supported_io_types": { 00:08:09.493 "read": true, 00:08:09.493 "write": true, 00:08:09.493 "unmap": true, 00:08:09.493 "flush": true, 00:08:09.493 "reset": true, 00:08:09.493 "nvme_admin": false, 00:08:09.493 "nvme_io": false, 00:08:09.493 "nvme_io_md": false, 00:08:09.493 "write_zeroes": true, 00:08:09.493 "zcopy": true, 00:08:09.493 "get_zone_info": false, 00:08:09.493 
"zone_management": false, 00:08:09.493 "zone_append": false, 00:08:09.493 "compare": false, 00:08:09.493 "compare_and_write": false, 00:08:09.493 "abort": true, 00:08:09.493 "seek_hole": false, 00:08:09.493 "seek_data": false, 00:08:09.493 "copy": true, 00:08:09.493 "nvme_iov_md": false 00:08:09.493 }, 00:08:09.493 "memory_domains": [ 00:08:09.493 { 00:08:09.493 "dma_device_id": "system", 00:08:09.493 "dma_device_type": 1 00:08:09.493 }, 00:08:09.493 { 00:08:09.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.493 "dma_device_type": 2 00:08:09.493 } 00:08:09.493 ], 00:08:09.493 "driver_specific": {} 00:08:09.493 } 00:08:09.493 ] 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:09.493 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.494 BaseBdev3 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.494 [ 00:08:09.494 { 00:08:09.494 "name": "BaseBdev3", 00:08:09.494 "aliases": [ 00:08:09.494 "3538c6d1-a912-4460-8943-e34433b562a4" 00:08:09.494 ], 00:08:09.494 "product_name": "Malloc disk", 00:08:09.494 "block_size": 512, 00:08:09.494 "num_blocks": 65536, 00:08:09.494 "uuid": "3538c6d1-a912-4460-8943-e34433b562a4", 00:08:09.494 "assigned_rate_limits": { 00:08:09.494 "rw_ios_per_sec": 0, 00:08:09.494 "rw_mbytes_per_sec": 0, 00:08:09.494 "r_mbytes_per_sec": 0, 00:08:09.494 "w_mbytes_per_sec": 0 00:08:09.494 }, 00:08:09.494 "claimed": false, 00:08:09.494 "zoned": false, 00:08:09.494 "supported_io_types": { 00:08:09.494 "read": true, 00:08:09.494 "write": true, 00:08:09.494 "unmap": true, 00:08:09.494 "flush": true, 00:08:09.494 "reset": true, 00:08:09.494 "nvme_admin": false, 00:08:09.494 "nvme_io": false, 00:08:09.494 "nvme_io_md": false, 00:08:09.494 "write_zeroes": true, 00:08:09.494 
"zcopy": true, 00:08:09.494 "get_zone_info": false, 00:08:09.494 "zone_management": false, 00:08:09.494 "zone_append": false, 00:08:09.494 "compare": false, 00:08:09.494 "compare_and_write": false, 00:08:09.494 "abort": true, 00:08:09.494 "seek_hole": false, 00:08:09.494 "seek_data": false, 00:08:09.494 "copy": true, 00:08:09.494 "nvme_iov_md": false 00:08:09.494 }, 00:08:09.494 "memory_domains": [ 00:08:09.494 { 00:08:09.494 "dma_device_id": "system", 00:08:09.494 "dma_device_type": 1 00:08:09.494 }, 00:08:09.494 { 00:08:09.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.494 "dma_device_type": 2 00:08:09.494 } 00:08:09.494 ], 00:08:09.494 "driver_specific": {} 00:08:09.494 } 00:08:09.494 ] 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.494 [2024-11-20 17:00:33.277468] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.494 [2024-11-20 17:00:33.277520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.494 [2024-11-20 17:00:33.277552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.494 [2024-11-20 17:00:33.280138] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.494 17:00:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.494 "name": "Existed_Raid", 00:08:09.494 "uuid": "0d1ad142-5112-4083-9899-551e8fad7f84", 00:08:09.494 "strip_size_kb": 64, 00:08:09.494 "state": "configuring", 00:08:09.494 "raid_level": "raid0", 00:08:09.494 "superblock": true, 00:08:09.494 "num_base_bdevs": 3, 00:08:09.494 "num_base_bdevs_discovered": 2, 00:08:09.494 "num_base_bdevs_operational": 3, 00:08:09.494 "base_bdevs_list": [ 00:08:09.494 { 00:08:09.494 "name": "BaseBdev1", 00:08:09.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.494 "is_configured": false, 00:08:09.494 "data_offset": 0, 00:08:09.494 "data_size": 0 00:08:09.494 }, 00:08:09.494 { 00:08:09.494 "name": "BaseBdev2", 00:08:09.494 "uuid": "96f3a00c-2f56-4e8f-89f2-d3e7a3894e36", 00:08:09.494 "is_configured": true, 00:08:09.494 "data_offset": 2048, 00:08:09.494 "data_size": 63488 00:08:09.494 }, 00:08:09.494 { 00:08:09.494 "name": "BaseBdev3", 00:08:09.494 "uuid": "3538c6d1-a912-4460-8943-e34433b562a4", 00:08:09.494 "is_configured": true, 00:08:09.494 "data_offset": 2048, 00:08:09.494 "data_size": 63488 00:08:09.494 } 00:08:09.494 ] 00:08:09.494 }' 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.494 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.063 [2024-11-20 17:00:33.829827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.063 17:00:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.063 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.063 "name": "Existed_Raid", 00:08:10.063 "uuid": "0d1ad142-5112-4083-9899-551e8fad7f84", 00:08:10.064 "strip_size_kb": 64, 
00:08:10.064 "state": "configuring", 00:08:10.064 "raid_level": "raid0", 00:08:10.064 "superblock": true, 00:08:10.064 "num_base_bdevs": 3, 00:08:10.064 "num_base_bdevs_discovered": 1, 00:08:10.064 "num_base_bdevs_operational": 3, 00:08:10.064 "base_bdevs_list": [ 00:08:10.064 { 00:08:10.064 "name": "BaseBdev1", 00:08:10.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.064 "is_configured": false, 00:08:10.064 "data_offset": 0, 00:08:10.064 "data_size": 0 00:08:10.064 }, 00:08:10.064 { 00:08:10.064 "name": null, 00:08:10.064 "uuid": "96f3a00c-2f56-4e8f-89f2-d3e7a3894e36", 00:08:10.064 "is_configured": false, 00:08:10.064 "data_offset": 0, 00:08:10.064 "data_size": 63488 00:08:10.064 }, 00:08:10.064 { 00:08:10.064 "name": "BaseBdev3", 00:08:10.064 "uuid": "3538c6d1-a912-4460-8943-e34433b562a4", 00:08:10.064 "is_configured": true, 00:08:10.064 "data_offset": 2048, 00:08:10.064 "data_size": 63488 00:08:10.064 } 00:08:10.064 ] 00:08:10.064 }' 00:08:10.064 17:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.064 17:00:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.633 [2024-11-20 17:00:34.453973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.633 BaseBdev1 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.633 
[ 00:08:10.633 { 00:08:10.633 "name": "BaseBdev1", 00:08:10.633 "aliases": [ 00:08:10.633 "b1947240-6425-4afa-a7ce-1238a7a7008b" 00:08:10.633 ], 00:08:10.633 "product_name": "Malloc disk", 00:08:10.633 "block_size": 512, 00:08:10.633 "num_blocks": 65536, 00:08:10.633 "uuid": "b1947240-6425-4afa-a7ce-1238a7a7008b", 00:08:10.633 "assigned_rate_limits": { 00:08:10.633 "rw_ios_per_sec": 0, 00:08:10.633 "rw_mbytes_per_sec": 0, 00:08:10.633 "r_mbytes_per_sec": 0, 00:08:10.633 "w_mbytes_per_sec": 0 00:08:10.633 }, 00:08:10.633 "claimed": true, 00:08:10.633 "claim_type": "exclusive_write", 00:08:10.633 "zoned": false, 00:08:10.633 "supported_io_types": { 00:08:10.633 "read": true, 00:08:10.633 "write": true, 00:08:10.633 "unmap": true, 00:08:10.633 "flush": true, 00:08:10.633 "reset": true, 00:08:10.633 "nvme_admin": false, 00:08:10.633 "nvme_io": false, 00:08:10.633 "nvme_io_md": false, 00:08:10.633 "write_zeroes": true, 00:08:10.633 "zcopy": true, 00:08:10.633 "get_zone_info": false, 00:08:10.633 "zone_management": false, 00:08:10.633 "zone_append": false, 00:08:10.633 "compare": false, 00:08:10.633 "compare_and_write": false, 00:08:10.633 "abort": true, 00:08:10.633 "seek_hole": false, 00:08:10.633 "seek_data": false, 00:08:10.633 "copy": true, 00:08:10.633 "nvme_iov_md": false 00:08:10.633 }, 00:08:10.633 "memory_domains": [ 00:08:10.633 { 00:08:10.633 "dma_device_id": "system", 00:08:10.633 "dma_device_type": 1 00:08:10.633 }, 00:08:10.633 { 00:08:10.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.633 "dma_device_type": 2 00:08:10.633 } 00:08:10.633 ], 00:08:10.633 "driver_specific": {} 00:08:10.633 } 00:08:10.633 ] 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.633 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.634 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.634 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.634 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.634 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.634 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.634 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.907 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.907 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.907 "name": "Existed_Raid", 00:08:10.907 "uuid": "0d1ad142-5112-4083-9899-551e8fad7f84", 00:08:10.907 "strip_size_kb": 64, 00:08:10.907 "state": "configuring", 00:08:10.907 "raid_level": "raid0", 00:08:10.907 "superblock": true, 
00:08:10.907 "num_base_bdevs": 3, 00:08:10.907 "num_base_bdevs_discovered": 2, 00:08:10.907 "num_base_bdevs_operational": 3, 00:08:10.907 "base_bdevs_list": [ 00:08:10.907 { 00:08:10.907 "name": "BaseBdev1", 00:08:10.907 "uuid": "b1947240-6425-4afa-a7ce-1238a7a7008b", 00:08:10.907 "is_configured": true, 00:08:10.907 "data_offset": 2048, 00:08:10.907 "data_size": 63488 00:08:10.907 }, 00:08:10.907 { 00:08:10.907 "name": null, 00:08:10.907 "uuid": "96f3a00c-2f56-4e8f-89f2-d3e7a3894e36", 00:08:10.907 "is_configured": false, 00:08:10.907 "data_offset": 0, 00:08:10.907 "data_size": 63488 00:08:10.907 }, 00:08:10.907 { 00:08:10.907 "name": "BaseBdev3", 00:08:10.907 "uuid": "3538c6d1-a912-4460-8943-e34433b562a4", 00:08:10.907 "is_configured": true, 00:08:10.907 "data_offset": 2048, 00:08:10.907 "data_size": 63488 00:08:10.907 } 00:08:10.907 ] 00:08:10.907 }' 00:08:10.907 17:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.907 17:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.171 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.171 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.171 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.171 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:11.171 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
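For readers following the trace: the repeated `verify_raid_bdev_state Existed_Raid configuring raid0 64 3` call amounts to selecting the raid bdev from `bdev_raid_get_bdevs all` output and comparing a few fields. A minimal Python sketch of that check, over a trimmed copy of the `Existed_Raid` record dumped above (field names are taken from the log; the helper is an illustration, not the actual `bdev_raid.sh` code):

```python
import json

# Trimmed copy of the "Existed_Raid" record dumped in the trace above.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": null, "is_configured": false},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    # Mirrors the comparisons implied by verify_raid_bdev_state in bdev_raid.sh.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # "discovered" counts base bdev slots that are actually configured.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]

verify_raid_bdev_state(raid_bdev_info, "configuring", "raid0", 64, 3)
```

With superblock enabled (`superblock: true` in the dump), the raid stays in `configuring` until all three slots are discovered, which matches the `num_base_bdevs_discovered: 2` seen at this point in the trace.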
# xtrace_disable 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.430 [2024-11-20 17:00:35.078299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.430 "name": "Existed_Raid", 00:08:11.430 "uuid": "0d1ad142-5112-4083-9899-551e8fad7f84", 00:08:11.430 "strip_size_kb": 64, 00:08:11.430 "state": "configuring", 00:08:11.430 "raid_level": "raid0", 00:08:11.430 "superblock": true, 00:08:11.430 "num_base_bdevs": 3, 00:08:11.430 "num_base_bdevs_discovered": 1, 00:08:11.430 "num_base_bdevs_operational": 3, 00:08:11.430 "base_bdevs_list": [ 00:08:11.430 { 00:08:11.430 "name": "BaseBdev1", 00:08:11.430 "uuid": "b1947240-6425-4afa-a7ce-1238a7a7008b", 00:08:11.430 "is_configured": true, 00:08:11.430 "data_offset": 2048, 00:08:11.430 "data_size": 63488 00:08:11.430 }, 00:08:11.430 { 00:08:11.430 "name": null, 00:08:11.430 "uuid": "96f3a00c-2f56-4e8f-89f2-d3e7a3894e36", 00:08:11.430 "is_configured": false, 00:08:11.430 "data_offset": 0, 00:08:11.430 "data_size": 63488 00:08:11.430 }, 00:08:11.430 { 00:08:11.430 "name": null, 00:08:11.430 "uuid": "3538c6d1-a912-4460-8943-e34433b562a4", 00:08:11.430 "is_configured": false, 00:08:11.430 "data_offset": 0, 00:08:11.430 "data_size": 63488 00:08:11.430 } 00:08:11.430 ] 00:08:11.430 }' 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.430 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.998 [2024-11-20 17:00:35.650541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.998 "name": "Existed_Raid", 00:08:11.998 "uuid": "0d1ad142-5112-4083-9899-551e8fad7f84", 00:08:11.998 "strip_size_kb": 64, 00:08:11.998 "state": "configuring", 00:08:11.998 "raid_level": "raid0", 00:08:11.998 "superblock": true, 00:08:11.998 "num_base_bdevs": 3, 00:08:11.998 "num_base_bdevs_discovered": 2, 00:08:11.998 "num_base_bdevs_operational": 3, 00:08:11.998 "base_bdevs_list": [ 00:08:11.998 { 00:08:11.998 "name": "BaseBdev1", 00:08:11.998 "uuid": "b1947240-6425-4afa-a7ce-1238a7a7008b", 00:08:11.998 "is_configured": true, 00:08:11.998 "data_offset": 2048, 00:08:11.998 "data_size": 63488 00:08:11.998 }, 00:08:11.998 { 00:08:11.998 "name": null, 00:08:11.998 "uuid": "96f3a00c-2f56-4e8f-89f2-d3e7a3894e36", 00:08:11.998 "is_configured": false, 00:08:11.998 "data_offset": 0, 00:08:11.998 "data_size": 63488 00:08:11.998 }, 00:08:11.998 { 00:08:11.998 "name": "BaseBdev3", 00:08:11.998 "uuid": "3538c6d1-a912-4460-8943-e34433b562a4", 00:08:11.998 "is_configured": true, 00:08:11.998 "data_offset": 2048, 00:08:11.998 "data_size": 63488 00:08:11.998 } 00:08:11.998 ] 00:08:11.998 }' 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.998 17:00:35 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.566 [2024-11-20 17:00:36.206730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.566 "name": "Existed_Raid", 00:08:12.566 "uuid": "0d1ad142-5112-4083-9899-551e8fad7f84", 00:08:12.566 "strip_size_kb": 64, 00:08:12.566 "state": "configuring", 00:08:12.566 "raid_level": "raid0", 00:08:12.566 "superblock": true, 00:08:12.566 "num_base_bdevs": 3, 00:08:12.566 "num_base_bdevs_discovered": 1, 00:08:12.566 "num_base_bdevs_operational": 3, 00:08:12.566 "base_bdevs_list": [ 00:08:12.566 { 00:08:12.566 "name": null, 00:08:12.566 "uuid": "b1947240-6425-4afa-a7ce-1238a7a7008b", 00:08:12.566 "is_configured": false, 00:08:12.566 "data_offset": 0, 00:08:12.566 "data_size": 63488 00:08:12.566 }, 00:08:12.566 { 00:08:12.566 "name": null, 00:08:12.566 "uuid": "96f3a00c-2f56-4e8f-89f2-d3e7a3894e36", 00:08:12.566 "is_configured": false, 00:08:12.566 "data_offset": 0, 00:08:12.566 
"data_size": 63488 00:08:12.566 }, 00:08:12.566 { 00:08:12.566 "name": "BaseBdev3", 00:08:12.566 "uuid": "3538c6d1-a912-4460-8943-e34433b562a4", 00:08:12.566 "is_configured": true, 00:08:12.566 "data_offset": 2048, 00:08:12.566 "data_size": 63488 00:08:12.566 } 00:08:12.566 ] 00:08:12.566 }' 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.566 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.134 [2024-11-20 17:00:36.861994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.134 17:00:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.134 "name": "Existed_Raid", 00:08:13.134 "uuid": "0d1ad142-5112-4083-9899-551e8fad7f84", 00:08:13.134 "strip_size_kb": 64, 00:08:13.134 "state": "configuring", 00:08:13.134 "raid_level": "raid0", 00:08:13.134 "superblock": true, 00:08:13.134 "num_base_bdevs": 3, 00:08:13.134 
"num_base_bdevs_discovered": 2, 00:08:13.134 "num_base_bdevs_operational": 3, 00:08:13.134 "base_bdevs_list": [ 00:08:13.134 { 00:08:13.134 "name": null, 00:08:13.134 "uuid": "b1947240-6425-4afa-a7ce-1238a7a7008b", 00:08:13.134 "is_configured": false, 00:08:13.134 "data_offset": 0, 00:08:13.134 "data_size": 63488 00:08:13.134 }, 00:08:13.134 { 00:08:13.134 "name": "BaseBdev2", 00:08:13.134 "uuid": "96f3a00c-2f56-4e8f-89f2-d3e7a3894e36", 00:08:13.134 "is_configured": true, 00:08:13.134 "data_offset": 2048, 00:08:13.134 "data_size": 63488 00:08:13.134 }, 00:08:13.134 { 00:08:13.134 "name": "BaseBdev3", 00:08:13.134 "uuid": "3538c6d1-a912-4460-8943-e34433b562a4", 00:08:13.134 "is_configured": true, 00:08:13.134 "data_offset": 2048, 00:08:13.134 "data_size": 63488 00:08:13.134 } 00:08:13.134 ] 00:08:13.134 }' 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.134 17:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:13.702 17:00:37 
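The per-slot probes in the trace, e.g. `jq '.[0].base_bdevs_list[2].is_configured'` after `bdev_raid_remove_base_bdev BaseBdev3`, just index into the first raid bdev's slot list: a removed base bdev keeps its slot but loses its name and its `is_configured` flag. A minimal Python equivalent (sample data mirrors the shape of the dumps above, trimmed for brevity):

```python
import json

# First element of bdev_raid_get_bdevs output after a base bdev removal:
# the slot remains, but "name" becomes null and is_configured flips to false.
bdevs = json.loads("""
[{"name": "Existed_Raid",
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": null, "is_configured": false},
    {"name": null, "is_configured": false}
  ]}]
""")

# Equivalent of: jq '.[0].base_bdevs_list[2].is_configured'
assert bdevs[0]["base_bdevs_list"][2]["is_configured"] is False
```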
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b1947240-6425-4afa-a7ce-1238a7a7008b 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.702 [2024-11-20 17:00:37.526810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:13.702 [2024-11-20 17:00:37.527095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:13.702 [2024-11-20 17:00:37.527119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:13.702 [2024-11-20 17:00:37.527434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:13.702 NewBaseBdev 00:08:13.702 [2024-11-20 17:00:37.527622] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:13.702 [2024-11-20 17:00:37.527637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:13.702 [2024-11-20 17:00:37.527818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:13.702 
17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.702 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.702 [ 00:08:13.702 { 00:08:13.702 "name": "NewBaseBdev", 00:08:13.702 "aliases": [ 00:08:13.702 "b1947240-6425-4afa-a7ce-1238a7a7008b" 00:08:13.702 ], 00:08:13.702 "product_name": "Malloc disk", 00:08:13.702 "block_size": 512, 00:08:13.702 "num_blocks": 65536, 00:08:13.702 "uuid": "b1947240-6425-4afa-a7ce-1238a7a7008b", 00:08:13.702 "assigned_rate_limits": { 00:08:13.702 "rw_ios_per_sec": 0, 00:08:13.702 "rw_mbytes_per_sec": 0, 00:08:13.702 "r_mbytes_per_sec": 0, 00:08:13.702 "w_mbytes_per_sec": 0 00:08:13.702 }, 00:08:13.702 "claimed": true, 00:08:13.702 "claim_type": "exclusive_write", 00:08:13.702 "zoned": false, 00:08:13.702 "supported_io_types": { 00:08:13.702 "read": true, 00:08:13.702 "write": true, 00:08:13.702 
"unmap": true, 00:08:13.702 "flush": true, 00:08:13.702 "reset": true, 00:08:13.702 "nvme_admin": false, 00:08:13.702 "nvme_io": false, 00:08:13.703 "nvme_io_md": false, 00:08:13.703 "write_zeroes": true, 00:08:13.703 "zcopy": true, 00:08:13.703 "get_zone_info": false, 00:08:13.703 "zone_management": false, 00:08:13.703 "zone_append": false, 00:08:13.703 "compare": false, 00:08:13.703 "compare_and_write": false, 00:08:13.703 "abort": true, 00:08:13.703 "seek_hole": false, 00:08:13.703 "seek_data": false, 00:08:13.703 "copy": true, 00:08:13.703 "nvme_iov_md": false 00:08:13.703 }, 00:08:13.703 "memory_domains": [ 00:08:13.703 { 00:08:13.703 "dma_device_id": "system", 00:08:13.703 "dma_device_type": 1 00:08:13.703 }, 00:08:13.703 { 00:08:13.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.703 "dma_device_type": 2 00:08:13.703 } 00:08:13.703 ], 00:08:13.703 "driver_specific": {} 00:08:13.703 } 00:08:13.703 ] 00:08:13.703 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.703 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:13.703 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:13.703 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.703 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.703 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.703 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.703 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.703 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:13.703 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.703 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.703 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.703 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.703 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.703 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.703 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.962 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.962 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.962 "name": "Existed_Raid", 00:08:13.962 "uuid": "0d1ad142-5112-4083-9899-551e8fad7f84", 00:08:13.962 "strip_size_kb": 64, 00:08:13.962 "state": "online", 00:08:13.962 "raid_level": "raid0", 00:08:13.962 "superblock": true, 00:08:13.962 "num_base_bdevs": 3, 00:08:13.962 "num_base_bdevs_discovered": 3, 00:08:13.962 "num_base_bdevs_operational": 3, 00:08:13.962 "base_bdevs_list": [ 00:08:13.962 { 00:08:13.962 "name": "NewBaseBdev", 00:08:13.962 "uuid": "b1947240-6425-4afa-a7ce-1238a7a7008b", 00:08:13.962 "is_configured": true, 00:08:13.962 "data_offset": 2048, 00:08:13.962 "data_size": 63488 00:08:13.962 }, 00:08:13.962 { 00:08:13.962 "name": "BaseBdev2", 00:08:13.962 "uuid": "96f3a00c-2f56-4e8f-89f2-d3e7a3894e36", 00:08:13.962 "is_configured": true, 00:08:13.962 "data_offset": 2048, 00:08:13.962 "data_size": 63488 00:08:13.962 }, 00:08:13.962 { 00:08:13.962 "name": "BaseBdev3", 00:08:13.962 "uuid": "3538c6d1-a912-4460-8943-e34433b562a4", 00:08:13.962 
"is_configured": true, 00:08:13.962 "data_offset": 2048, 00:08:13.962 "data_size": 63488 00:08:13.962 } 00:08:13.962 ] 00:08:13.962 }' 00:08:13.962 17:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.962 17:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.220 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:14.220 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:14.220 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:14.220 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:14.220 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:14.220 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.480 [2024-11-20 17:00:38.095552] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:14.480 "name": "Existed_Raid", 00:08:14.480 "aliases": [ 00:08:14.480 "0d1ad142-5112-4083-9899-551e8fad7f84" 00:08:14.480 ], 00:08:14.480 "product_name": "Raid 
Volume", 00:08:14.480 "block_size": 512, 00:08:14.480 "num_blocks": 190464, 00:08:14.480 "uuid": "0d1ad142-5112-4083-9899-551e8fad7f84", 00:08:14.480 "assigned_rate_limits": { 00:08:14.480 "rw_ios_per_sec": 0, 00:08:14.480 "rw_mbytes_per_sec": 0, 00:08:14.480 "r_mbytes_per_sec": 0, 00:08:14.480 "w_mbytes_per_sec": 0 00:08:14.480 }, 00:08:14.480 "claimed": false, 00:08:14.480 "zoned": false, 00:08:14.480 "supported_io_types": { 00:08:14.480 "read": true, 00:08:14.480 "write": true, 00:08:14.480 "unmap": true, 00:08:14.480 "flush": true, 00:08:14.480 "reset": true, 00:08:14.480 "nvme_admin": false, 00:08:14.480 "nvme_io": false, 00:08:14.480 "nvme_io_md": false, 00:08:14.480 "write_zeroes": true, 00:08:14.480 "zcopy": false, 00:08:14.480 "get_zone_info": false, 00:08:14.480 "zone_management": false, 00:08:14.480 "zone_append": false, 00:08:14.480 "compare": false, 00:08:14.480 "compare_and_write": false, 00:08:14.480 "abort": false, 00:08:14.480 "seek_hole": false, 00:08:14.480 "seek_data": false, 00:08:14.480 "copy": false, 00:08:14.480 "nvme_iov_md": false 00:08:14.480 }, 00:08:14.480 "memory_domains": [ 00:08:14.480 { 00:08:14.480 "dma_device_id": "system", 00:08:14.480 "dma_device_type": 1 00:08:14.480 }, 00:08:14.480 { 00:08:14.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.480 "dma_device_type": 2 00:08:14.480 }, 00:08:14.480 { 00:08:14.480 "dma_device_id": "system", 00:08:14.480 "dma_device_type": 1 00:08:14.480 }, 00:08:14.480 { 00:08:14.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.480 "dma_device_type": 2 00:08:14.480 }, 00:08:14.480 { 00:08:14.480 "dma_device_id": "system", 00:08:14.480 "dma_device_type": 1 00:08:14.480 }, 00:08:14.480 { 00:08:14.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.480 "dma_device_type": 2 00:08:14.480 } 00:08:14.480 ], 00:08:14.480 "driver_specific": { 00:08:14.480 "raid": { 00:08:14.480 "uuid": "0d1ad142-5112-4083-9899-551e8fad7f84", 00:08:14.480 "strip_size_kb": 64, 00:08:14.480 "state": "online", 
00:08:14.480 "raid_level": "raid0", 00:08:14.480 "superblock": true, 00:08:14.480 "num_base_bdevs": 3, 00:08:14.480 "num_base_bdevs_discovered": 3, 00:08:14.480 "num_base_bdevs_operational": 3, 00:08:14.480 "base_bdevs_list": [ 00:08:14.480 { 00:08:14.480 "name": "NewBaseBdev", 00:08:14.480 "uuid": "b1947240-6425-4afa-a7ce-1238a7a7008b", 00:08:14.480 "is_configured": true, 00:08:14.480 "data_offset": 2048, 00:08:14.480 "data_size": 63488 00:08:14.480 }, 00:08:14.480 { 00:08:14.480 "name": "BaseBdev2", 00:08:14.480 "uuid": "96f3a00c-2f56-4e8f-89f2-d3e7a3894e36", 00:08:14.480 "is_configured": true, 00:08:14.480 "data_offset": 2048, 00:08:14.480 "data_size": 63488 00:08:14.480 }, 00:08:14.480 { 00:08:14.480 "name": "BaseBdev3", 00:08:14.480 "uuid": "3538c6d1-a912-4460-8943-e34433b562a4", 00:08:14.480 "is_configured": true, 00:08:14.480 "data_offset": 2048, 00:08:14.480 "data_size": 63488 00:08:14.480 } 00:08:14.480 ] 00:08:14.480 } 00:08:14.480 } 00:08:14.480 }' 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:14.480 BaseBdev2 00:08:14.480 BaseBdev3' 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.480 17:00:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.480 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.740 17:00:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.740 [2024-11-20 17:00:38.407212] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:14.740 [2024-11-20 17:00:38.407245] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.740 [2024-11-20 17:00:38.407404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.740 [2024-11-20 17:00:38.407473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.740 [2024-11-20 17:00:38.407491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64246 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64246 ']' 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
64246 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64246 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.740 killing process with pid 64246 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64246' 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64246 00:08:14.740 [2024-11-20 17:00:38.450053] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:14.740 17:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64246 00:08:14.999 [2024-11-20 17:00:38.726632] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.377 17:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:16.377 00:08:16.377 real 0m11.910s 00:08:16.377 user 0m19.813s 00:08:16.377 sys 0m1.590s 00:08:16.377 ************************************ 00:08:16.377 END TEST raid_state_function_test_sb 00:08:16.377 ************************************ 00:08:16.377 17:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.377 17:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.377 17:00:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:16.377 17:00:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:16.377 
17:00:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.377 17:00:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.378 ************************************ 00:08:16.378 START TEST raid_superblock_test 00:08:16.378 ************************************ 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64883 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64883 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64883 ']' 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.378 17:00:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.378 [2024-11-20 17:00:39.983197] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:08:16.378 [2024-11-20 17:00:39.983415] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64883 ] 00:08:16.378 [2024-11-20 17:00:40.172434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.637 [2024-11-20 17:00:40.320651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.897 [2024-11-20 17:00:40.537985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.897 [2024-11-20 17:00:40.538031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.465 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.465 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:17.465 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:17.465 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:17.465 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:17.465 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:17.465 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:17.465 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:17.465 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:17.465 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:17.465 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:17.465 
17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.465 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.465 malloc1 00:08:17.465 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.465 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:17.465 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.465 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.465 [2024-11-20 17:00:41.095927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:17.465 [2024-11-20 17:00:41.096218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.465 [2024-11-20 17:00:41.096278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:17.465 [2024-11-20 17:00:41.096301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.465 [2024-11-20 17:00:41.099962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.465 [2024-11-20 17:00:41.100019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:17.466 pt1 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.466 malloc2 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.466 [2024-11-20 17:00:41.162156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:17.466 [2024-11-20 17:00:41.162249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.466 [2024-11-20 17:00:41.162294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:17.466 [2024-11-20 17:00:41.162313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.466 [2024-11-20 17:00:41.165972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.466 [2024-11-20 17:00:41.166028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:17.466 
pt2 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.466 malloc3 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.466 [2024-11-20 17:00:41.240266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:17.466 [2024-11-20 17:00:41.240353] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.466 [2024-11-20 17:00:41.240418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:17.466 [2024-11-20 17:00:41.240456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.466 [2024-11-20 17:00:41.243955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.466 [2024-11-20 17:00:41.244010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:17.466 pt3 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.466 [2024-11-20 17:00:41.252410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:17.466 [2024-11-20 17:00:41.255528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:17.466 [2024-11-20 17:00:41.255870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:17.466 [2024-11-20 17:00:41.256145] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:17.466 [2024-11-20 17:00:41.256175] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:17.466 [2024-11-20 17:00:41.256548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:17.466 [2024-11-20 17:00:41.256859] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:17.466 [2024-11-20 17:00:41.256887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:17.466 [2024-11-20 17:00:41.257175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.466 17:00:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.466 "name": "raid_bdev1", 00:08:17.466 "uuid": "ee30fd39-65c2-4faa-a192-c596b33fe5fa", 00:08:17.466 "strip_size_kb": 64, 00:08:17.466 "state": "online", 00:08:17.466 "raid_level": "raid0", 00:08:17.466 "superblock": true, 00:08:17.466 "num_base_bdevs": 3, 00:08:17.466 "num_base_bdevs_discovered": 3, 00:08:17.466 "num_base_bdevs_operational": 3, 00:08:17.466 "base_bdevs_list": [ 00:08:17.466 { 00:08:17.466 "name": "pt1", 00:08:17.466 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:17.466 "is_configured": true, 00:08:17.466 "data_offset": 2048, 00:08:17.466 "data_size": 63488 00:08:17.466 }, 00:08:17.466 { 00:08:17.466 "name": "pt2", 00:08:17.466 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:17.466 "is_configured": true, 00:08:17.466 "data_offset": 2048, 00:08:17.466 "data_size": 63488 00:08:17.466 }, 00:08:17.466 { 00:08:17.466 "name": "pt3", 00:08:17.466 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:17.466 "is_configured": true, 00:08:17.466 "data_offset": 2048, 00:08:17.466 "data_size": 63488 00:08:17.466 } 00:08:17.466 ] 00:08:17.466 }' 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.466 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.034 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:18.034 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:18.034 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:18.034 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:18.034 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:18.034 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:18.034 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:18.034 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.034 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.034 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:18.034 [2024-11-20 17:00:41.769697] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.034 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.034 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:18.034 "name": "raid_bdev1", 00:08:18.034 "aliases": [ 00:08:18.034 "ee30fd39-65c2-4faa-a192-c596b33fe5fa" 00:08:18.034 ], 00:08:18.034 "product_name": "Raid Volume", 00:08:18.034 "block_size": 512, 00:08:18.034 "num_blocks": 190464, 00:08:18.034 "uuid": "ee30fd39-65c2-4faa-a192-c596b33fe5fa", 00:08:18.034 "assigned_rate_limits": { 00:08:18.034 "rw_ios_per_sec": 0, 00:08:18.034 "rw_mbytes_per_sec": 0, 00:08:18.034 "r_mbytes_per_sec": 0, 00:08:18.034 "w_mbytes_per_sec": 0 00:08:18.034 }, 00:08:18.034 "claimed": false, 00:08:18.034 "zoned": false, 00:08:18.034 "supported_io_types": { 00:08:18.034 "read": true, 00:08:18.034 "write": true, 00:08:18.034 "unmap": true, 00:08:18.034 "flush": true, 00:08:18.034 "reset": true, 00:08:18.034 "nvme_admin": false, 00:08:18.034 "nvme_io": false, 00:08:18.034 "nvme_io_md": false, 00:08:18.034 "write_zeroes": true, 00:08:18.034 "zcopy": false, 00:08:18.035 "get_zone_info": false, 00:08:18.035 "zone_management": false, 00:08:18.035 "zone_append": false, 00:08:18.035 "compare": 
false, 00:08:18.035 "compare_and_write": false, 00:08:18.035 "abort": false, 00:08:18.035 "seek_hole": false, 00:08:18.035 "seek_data": false, 00:08:18.035 "copy": false, 00:08:18.035 "nvme_iov_md": false 00:08:18.035 }, 00:08:18.035 "memory_domains": [ 00:08:18.035 { 00:08:18.035 "dma_device_id": "system", 00:08:18.035 "dma_device_type": 1 00:08:18.035 }, 00:08:18.035 { 00:08:18.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.035 "dma_device_type": 2 00:08:18.035 }, 00:08:18.035 { 00:08:18.035 "dma_device_id": "system", 00:08:18.035 "dma_device_type": 1 00:08:18.035 }, 00:08:18.035 { 00:08:18.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.035 "dma_device_type": 2 00:08:18.035 }, 00:08:18.035 { 00:08:18.035 "dma_device_id": "system", 00:08:18.035 "dma_device_type": 1 00:08:18.035 }, 00:08:18.035 { 00:08:18.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.035 "dma_device_type": 2 00:08:18.035 } 00:08:18.035 ], 00:08:18.035 "driver_specific": { 00:08:18.035 "raid": { 00:08:18.035 "uuid": "ee30fd39-65c2-4faa-a192-c596b33fe5fa", 00:08:18.035 "strip_size_kb": 64, 00:08:18.035 "state": "online", 00:08:18.035 "raid_level": "raid0", 00:08:18.035 "superblock": true, 00:08:18.035 "num_base_bdevs": 3, 00:08:18.035 "num_base_bdevs_discovered": 3, 00:08:18.035 "num_base_bdevs_operational": 3, 00:08:18.035 "base_bdevs_list": [ 00:08:18.035 { 00:08:18.035 "name": "pt1", 00:08:18.035 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:18.035 "is_configured": true, 00:08:18.035 "data_offset": 2048, 00:08:18.035 "data_size": 63488 00:08:18.035 }, 00:08:18.035 { 00:08:18.035 "name": "pt2", 00:08:18.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:18.035 "is_configured": true, 00:08:18.035 "data_offset": 2048, 00:08:18.035 "data_size": 63488 00:08:18.035 }, 00:08:18.035 { 00:08:18.035 "name": "pt3", 00:08:18.035 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:18.035 "is_configured": true, 00:08:18.035 "data_offset": 2048, 00:08:18.035 "data_size": 
63488 00:08:18.035 } 00:08:18.035 ] 00:08:18.035 } 00:08:18.035 } 00:08:18.035 }' 00:08:18.035 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:18.035 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:18.035 pt2 00:08:18.035 pt3' 00:08:18.035 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.294 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:18.294 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.294 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:18.294 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.294 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.294 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.294 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.294 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.294 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.294 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.294 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:18.294 17:00:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.294 17:00:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.294 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.294 17:00:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.294 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.294 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.294 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.294 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:18.294 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.295 [2024-11-20 17:00:42.089801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ee30fd39-65c2-4faa-a192-c596b33fe5fa 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ee30fd39-65c2-4faa-a192-c596b33fe5fa ']' 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.295 [2024-11-20 17:00:42.137360] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:18.295 [2024-11-20 17:00:42.137389] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.295 [2024-11-20 17:00:42.137474] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.295 [2024-11-20 17:00:42.137545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:18.295 [2024-11-20 17:00:42.137559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.295 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:18.566 17:00:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.566 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.566 [2024-11-20 17:00:42.305491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:18.566 [2024-11-20 17:00:42.308010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:18.566 [2024-11-20 17:00:42.308216] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:18.566 [2024-11-20 17:00:42.308302] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:18.566 [2024-11-20 17:00:42.308373] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:18.566 [2024-11-20 17:00:42.308409] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:18.566 [2024-11-20 17:00:42.308436] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:18.566 [2024-11-20 17:00:42.308451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:18.566 request: 00:08:18.566 { 00:08:18.566 "name": "raid_bdev1", 00:08:18.567 "raid_level": "raid0", 00:08:18.567 "base_bdevs": [ 00:08:18.567 "malloc1", 00:08:18.567 "malloc2", 00:08:18.567 "malloc3" 00:08:18.567 ], 00:08:18.567 "strip_size_kb": 64, 00:08:18.567 "superblock": false, 00:08:18.567 "method": "bdev_raid_create", 00:08:18.567 "req_id": 1 00:08:18.567 } 00:08:18.567 Got JSON-RPC error response 00:08:18.567 response: 00:08:18.567 { 00:08:18.567 "code": -17, 00:08:18.567 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:18.567 } 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.567 [2024-11-20 17:00:42.369471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:18.567 [2024-11-20 17:00:42.369540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.567 [2024-11-20 17:00:42.369566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:18.567 [2024-11-20 17:00:42.369579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.567 [2024-11-20 17:00:42.372747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.567 [2024-11-20 17:00:42.372945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:18.567 [2024-11-20 17:00:42.373076] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:18.567 [2024-11-20 17:00:42.373142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:18.567 pt1 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.567 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.838 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.838 "name": "raid_bdev1", 00:08:18.838 "uuid": "ee30fd39-65c2-4faa-a192-c596b33fe5fa", 00:08:18.838 
"strip_size_kb": 64, 00:08:18.838 "state": "configuring", 00:08:18.838 "raid_level": "raid0", 00:08:18.838 "superblock": true, 00:08:18.838 "num_base_bdevs": 3, 00:08:18.838 "num_base_bdevs_discovered": 1, 00:08:18.838 "num_base_bdevs_operational": 3, 00:08:18.838 "base_bdevs_list": [ 00:08:18.838 { 00:08:18.838 "name": "pt1", 00:08:18.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:18.838 "is_configured": true, 00:08:18.838 "data_offset": 2048, 00:08:18.838 "data_size": 63488 00:08:18.838 }, 00:08:18.838 { 00:08:18.838 "name": null, 00:08:18.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:18.838 "is_configured": false, 00:08:18.838 "data_offset": 2048, 00:08:18.838 "data_size": 63488 00:08:18.838 }, 00:08:18.838 { 00:08:18.838 "name": null, 00:08:18.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:18.838 "is_configured": false, 00:08:18.838 "data_offset": 2048, 00:08:18.838 "data_size": 63488 00:08:18.838 } 00:08:18.838 ] 00:08:18.838 }' 00:08:18.838 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.838 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.097 [2024-11-20 17:00:42.901712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:19.097 [2024-11-20 17:00:42.901821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.097 [2024-11-20 17:00:42.901862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:19.097 [2024-11-20 17:00:42.901877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.097 [2024-11-20 17:00:42.902428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.097 [2024-11-20 17:00:42.902473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:19.097 [2024-11-20 17:00:42.902581] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:19.097 [2024-11-20 17:00:42.902620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:19.097 pt2 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.097 [2024-11-20 17:00:42.909736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.097 17:00:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.097 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.356 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.356 "name": "raid_bdev1", 00:08:19.356 "uuid": "ee30fd39-65c2-4faa-a192-c596b33fe5fa", 00:08:19.356 "strip_size_kb": 64, 00:08:19.356 "state": "configuring", 00:08:19.356 "raid_level": "raid0", 00:08:19.356 "superblock": true, 00:08:19.356 "num_base_bdevs": 3, 00:08:19.356 "num_base_bdevs_discovered": 1, 00:08:19.356 "num_base_bdevs_operational": 3, 00:08:19.356 "base_bdevs_list": [ 00:08:19.356 { 00:08:19.356 "name": "pt1", 00:08:19.356 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.356 "is_configured": true, 00:08:19.356 "data_offset": 2048, 00:08:19.356 "data_size": 63488 00:08:19.356 }, 00:08:19.356 { 00:08:19.356 "name": null, 00:08:19.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.356 "is_configured": false, 00:08:19.356 "data_offset": 0, 00:08:19.356 "data_size": 63488 00:08:19.356 }, 00:08:19.356 { 00:08:19.356 "name": null, 00:08:19.356 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:19.356 
"is_configured": false, 00:08:19.356 "data_offset": 2048, 00:08:19.356 "data_size": 63488 00:08:19.356 } 00:08:19.356 ] 00:08:19.356 }' 00:08:19.356 17:00:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.356 17:00:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.616 [2024-11-20 17:00:43.458027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:19.616 [2024-11-20 17:00:43.458334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.616 [2024-11-20 17:00:43.458372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:19.616 [2024-11-20 17:00:43.458390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.616 [2024-11-20 17:00:43.459022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.616 [2024-11-20 17:00:43.459055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:19.616 [2024-11-20 17:00:43.459152] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:19.616 [2024-11-20 17:00:43.459232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:19.616 pt2 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.616 [2024-11-20 17:00:43.465992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:19.616 [2024-11-20 17:00:43.466053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.616 [2024-11-20 17:00:43.466075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:19.616 [2024-11-20 17:00:43.466091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.616 [2024-11-20 17:00:43.466595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.616 [2024-11-20 17:00:43.466635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:19.616 [2024-11-20 17:00:43.466752] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:19.616 [2024-11-20 17:00:43.466784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:19.616 [2024-11-20 17:00:43.466946] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:19.616 [2024-11-20 17:00:43.466975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:19.616 [2024-11-20 17:00:43.467288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:19.616 [2024-11-20 17:00:43.467509] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:19.616 [2024-11-20 17:00:43.467533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:19.616 [2024-11-20 17:00:43.467699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.616 pt3 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.616 17:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.875 17:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.875 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.875 "name": "raid_bdev1", 00:08:19.875 "uuid": "ee30fd39-65c2-4faa-a192-c596b33fe5fa", 00:08:19.875 "strip_size_kb": 64, 00:08:19.875 "state": "online", 00:08:19.875 "raid_level": "raid0", 00:08:19.875 "superblock": true, 00:08:19.875 "num_base_bdevs": 3, 00:08:19.875 "num_base_bdevs_discovered": 3, 00:08:19.875 "num_base_bdevs_operational": 3, 00:08:19.875 "base_bdevs_list": [ 00:08:19.875 { 00:08:19.875 "name": "pt1", 00:08:19.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.875 "is_configured": true, 00:08:19.875 "data_offset": 2048, 00:08:19.875 "data_size": 63488 00:08:19.875 }, 00:08:19.875 { 00:08:19.875 "name": "pt2", 00:08:19.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.875 "is_configured": true, 00:08:19.875 "data_offset": 2048, 00:08:19.875 "data_size": 63488 00:08:19.875 }, 00:08:19.875 { 00:08:19.875 "name": "pt3", 00:08:19.875 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:19.875 "is_configured": true, 00:08:19.875 "data_offset": 2048, 00:08:19.875 "data_size": 63488 00:08:19.875 } 00:08:19.875 ] 00:08:19.875 }' 00:08:19.875 17:00:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.875 17:00:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.442 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:20.442 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:20.442 17:00:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:20.442 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:20.442 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:20.442 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:20.442 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:20.442 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.442 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.442 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.442 [2024-11-20 17:00:44.014586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.442 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.442 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:20.442 "name": "raid_bdev1", 00:08:20.442 "aliases": [ 00:08:20.442 "ee30fd39-65c2-4faa-a192-c596b33fe5fa" 00:08:20.442 ], 00:08:20.442 "product_name": "Raid Volume", 00:08:20.442 "block_size": 512, 00:08:20.442 "num_blocks": 190464, 00:08:20.442 "uuid": "ee30fd39-65c2-4faa-a192-c596b33fe5fa", 00:08:20.442 "assigned_rate_limits": { 00:08:20.442 "rw_ios_per_sec": 0, 00:08:20.442 "rw_mbytes_per_sec": 0, 00:08:20.442 "r_mbytes_per_sec": 0, 00:08:20.442 "w_mbytes_per_sec": 0 00:08:20.442 }, 00:08:20.442 "claimed": false, 00:08:20.442 "zoned": false, 00:08:20.442 "supported_io_types": { 00:08:20.442 "read": true, 00:08:20.442 "write": true, 00:08:20.442 "unmap": true, 00:08:20.442 "flush": true, 00:08:20.442 "reset": true, 00:08:20.442 "nvme_admin": false, 00:08:20.442 "nvme_io": false, 00:08:20.442 "nvme_io_md": false, 00:08:20.442 
"write_zeroes": true, 00:08:20.442 "zcopy": false, 00:08:20.442 "get_zone_info": false, 00:08:20.442 "zone_management": false, 00:08:20.442 "zone_append": false, 00:08:20.442 "compare": false, 00:08:20.442 "compare_and_write": false, 00:08:20.442 "abort": false, 00:08:20.442 "seek_hole": false, 00:08:20.442 "seek_data": false, 00:08:20.442 "copy": false, 00:08:20.442 "nvme_iov_md": false 00:08:20.442 }, 00:08:20.442 "memory_domains": [ 00:08:20.442 { 00:08:20.442 "dma_device_id": "system", 00:08:20.442 "dma_device_type": 1 00:08:20.442 }, 00:08:20.442 { 00:08:20.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.443 "dma_device_type": 2 00:08:20.443 }, 00:08:20.443 { 00:08:20.443 "dma_device_id": "system", 00:08:20.443 "dma_device_type": 1 00:08:20.443 }, 00:08:20.443 { 00:08:20.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.443 "dma_device_type": 2 00:08:20.443 }, 00:08:20.443 { 00:08:20.443 "dma_device_id": "system", 00:08:20.443 "dma_device_type": 1 00:08:20.443 }, 00:08:20.443 { 00:08:20.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.443 "dma_device_type": 2 00:08:20.443 } 00:08:20.443 ], 00:08:20.443 "driver_specific": { 00:08:20.443 "raid": { 00:08:20.443 "uuid": "ee30fd39-65c2-4faa-a192-c596b33fe5fa", 00:08:20.443 "strip_size_kb": 64, 00:08:20.443 "state": "online", 00:08:20.443 "raid_level": "raid0", 00:08:20.443 "superblock": true, 00:08:20.443 "num_base_bdevs": 3, 00:08:20.443 "num_base_bdevs_discovered": 3, 00:08:20.443 "num_base_bdevs_operational": 3, 00:08:20.443 "base_bdevs_list": [ 00:08:20.443 { 00:08:20.443 "name": "pt1", 00:08:20.443 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:20.443 "is_configured": true, 00:08:20.443 "data_offset": 2048, 00:08:20.443 "data_size": 63488 00:08:20.443 }, 00:08:20.443 { 00:08:20.443 "name": "pt2", 00:08:20.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.443 "is_configured": true, 00:08:20.443 "data_offset": 2048, 00:08:20.443 "data_size": 63488 00:08:20.443 }, 00:08:20.443 
{ 00:08:20.443 "name": "pt3", 00:08:20.443 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:20.443 "is_configured": true, 00:08:20.443 "data_offset": 2048, 00:08:20.443 "data_size": 63488 00:08:20.443 } 00:08:20.443 ] 00:08:20.443 } 00:08:20.443 } 00:08:20.443 }' 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:20.443 pt2 00:08:20.443 pt3' 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:20.443 17:00:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.443 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.702 
[2024-11-20 17:00:44.330696] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ee30fd39-65c2-4faa-a192-c596b33fe5fa '!=' ee30fd39-65c2-4faa-a192-c596b33fe5fa ']' 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64883 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64883 ']' 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64883 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64883 00:08:20.702 killing process with pid 64883 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64883' 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64883 00:08:20.702 [2024-11-20 17:00:44.412312] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.702 17:00:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 64883 00:08:20.702 [2024-11-20 17:00:44.412435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.702 [2024-11-20 17:00:44.412509] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.702 [2024-11-20 17:00:44.412529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:20.961 [2024-11-20 17:00:44.699458] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:22.336 17:00:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:22.336 00:08:22.336 real 0m5.953s 00:08:22.336 user 0m8.962s 00:08:22.336 sys 0m0.843s 00:08:22.336 17:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.337 17:00:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.337 ************************************ 00:08:22.337 END TEST raid_superblock_test 00:08:22.337 ************************************ 00:08:22.337 17:00:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:22.337 17:00:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:22.337 17:00:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.337 17:00:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:22.337 ************************************ 00:08:22.337 START TEST raid_read_error_test 00:08:22.337 ************************************ 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:22.337 17:00:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KTUaBVjJ79 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65147 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65147 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65147 ']' 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.337 17:00:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.337 [2024-11-20 17:00:46.004007] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:08:22.337 [2024-11-20 17:00:46.004184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65147 ] 00:08:22.337 [2024-11-20 17:00:46.195383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.596 [2024-11-20 17:00:46.356122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.855 [2024-11-20 17:00:46.579292] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.855 [2024-11-20 17:00:46.579388] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.424 BaseBdev1_malloc 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.424 true 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.424 [2024-11-20 17:00:47.173219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:23.424 [2024-11-20 17:00:47.173303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.424 [2024-11-20 17:00:47.173333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:23.424 [2024-11-20 17:00:47.173350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.424 [2024-11-20 17:00:47.176262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.424 [2024-11-20 17:00:47.176314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:23.424 BaseBdev1 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.424 BaseBdev2_malloc 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.424 true 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.424 [2024-11-20 17:00:47.237082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:23.424 [2024-11-20 17:00:47.237149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.424 [2024-11-20 17:00:47.237174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:23.424 [2024-11-20 17:00:47.237191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.424 [2024-11-20 17:00:47.240231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.424 [2024-11-20 17:00:47.240291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:23.424 BaseBdev2 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.424 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.682 BaseBdev3_malloc 00:08:23.682 17:00:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.682 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:23.682 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.682 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.682 true 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.683 [2024-11-20 17:00:47.315510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:23.683 [2024-11-20 17:00:47.315573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.683 [2024-11-20 17:00:47.315600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:23.683 [2024-11-20 17:00:47.315617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.683 [2024-11-20 17:00:47.318646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.683 [2024-11-20 17:00:47.318697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:23.683 BaseBdev3 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.683 [2024-11-20 17:00:47.323632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.683 [2024-11-20 17:00:47.326189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.683 [2024-11-20 17:00:47.326308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:23.683 [2024-11-20 17:00:47.326564] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:23.683 [2024-11-20 17:00:47.326586] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:23.683 [2024-11-20 17:00:47.326913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:23.683 [2024-11-20 17:00:47.327132] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:23.683 [2024-11-20 17:00:47.327154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:23.683 [2024-11-20 17:00:47.327332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.683 17:00:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.683 "name": "raid_bdev1", 00:08:23.683 "uuid": "1cadd4c1-3646-429a-b719-5648bdab52e1", 00:08:23.683 "strip_size_kb": 64, 00:08:23.683 "state": "online", 00:08:23.683 "raid_level": "raid0", 00:08:23.683 "superblock": true, 00:08:23.683 "num_base_bdevs": 3, 00:08:23.683 "num_base_bdevs_discovered": 3, 00:08:23.683 "num_base_bdevs_operational": 3, 00:08:23.683 "base_bdevs_list": [ 00:08:23.683 { 00:08:23.683 "name": "BaseBdev1", 00:08:23.683 "uuid": "545af02f-7a4c-58d1-a38a-bebcb42fc0d5", 00:08:23.683 "is_configured": true, 00:08:23.683 "data_offset": 2048, 00:08:23.683 "data_size": 63488 00:08:23.683 }, 00:08:23.683 { 00:08:23.683 "name": "BaseBdev2", 00:08:23.683 "uuid": "ad2167dd-ed5d-5ca0-a5d5-ddce60bc1060", 00:08:23.683 "is_configured": true, 00:08:23.683 "data_offset": 2048, 00:08:23.683 "data_size": 63488 
00:08:23.683 }, 00:08:23.683 { 00:08:23.683 "name": "BaseBdev3", 00:08:23.683 "uuid": "c6a40c12-b607-5db8-a451-248c19ddab6f", 00:08:23.683 "is_configured": true, 00:08:23.683 "data_offset": 2048, 00:08:23.683 "data_size": 63488 00:08:23.683 } 00:08:23.683 ] 00:08:23.683 }' 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.683 17:00:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.250 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:24.250 17:00:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:24.250 [2024-11-20 17:00:47.977297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.187 17:00:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.188 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.188 "name": "raid_bdev1", 00:08:25.188 "uuid": "1cadd4c1-3646-429a-b719-5648bdab52e1", 00:08:25.188 "strip_size_kb": 64, 00:08:25.188 "state": "online", 00:08:25.188 "raid_level": "raid0", 00:08:25.188 "superblock": true, 00:08:25.188 "num_base_bdevs": 3, 00:08:25.188 "num_base_bdevs_discovered": 3, 00:08:25.188 "num_base_bdevs_operational": 3, 00:08:25.188 "base_bdevs_list": [ 00:08:25.188 { 00:08:25.188 "name": "BaseBdev1", 00:08:25.188 "uuid": "545af02f-7a4c-58d1-a38a-bebcb42fc0d5", 00:08:25.188 "is_configured": true, 00:08:25.188 "data_offset": 2048, 00:08:25.188 "data_size": 63488 
00:08:25.188 }, 00:08:25.188 { 00:08:25.188 "name": "BaseBdev2", 00:08:25.188 "uuid": "ad2167dd-ed5d-5ca0-a5d5-ddce60bc1060", 00:08:25.188 "is_configured": true, 00:08:25.188 "data_offset": 2048, 00:08:25.188 "data_size": 63488 00:08:25.188 }, 00:08:25.188 { 00:08:25.188 "name": "BaseBdev3", 00:08:25.188 "uuid": "c6a40c12-b607-5db8-a451-248c19ddab6f", 00:08:25.188 "is_configured": true, 00:08:25.188 "data_offset": 2048, 00:08:25.188 "data_size": 63488 00:08:25.188 } 00:08:25.188 ] 00:08:25.188 }' 00:08:25.188 17:00:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.188 17:00:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.756 17:00:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:25.756 17:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.756 17:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.756 [2024-11-20 17:00:49.380221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:25.756 [2024-11-20 17:00:49.380253] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.756 [2024-11-20 17:00:49.384067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.756 [2024-11-20 17:00:49.384226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.756 [2024-11-20 17:00:49.384298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.756 [2024-11-20 17:00:49.384313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:25.756 { 00:08:25.756 "results": [ 00:08:25.756 { 00:08:25.756 "job": "raid_bdev1", 00:08:25.756 "core_mask": "0x1", 00:08:25.756 "workload": "randrw", 00:08:25.756 "percentage": 50, 
00:08:25.756 "status": "finished", 00:08:25.756 "queue_depth": 1, 00:08:25.756 "io_size": 131072, 00:08:25.756 "runtime": 1.400066, 00:08:25.756 "iops": 9898.81905567309, 00:08:25.756 "mibps": 1237.3523819591362, 00:08:25.756 "io_failed": 1, 00:08:25.756 "io_timeout": 0, 00:08:25.756 "avg_latency_us": 140.10900905155452, 00:08:25.756 "min_latency_us": 26.88, 00:08:25.756 "max_latency_us": 2010.7636363636364 00:08:25.756 } 00:08:25.756 ], 00:08:25.756 "core_count": 1 00:08:25.756 } 00:08:25.756 17:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.756 17:00:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65147 00:08:25.756 17:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65147 ']' 00:08:25.757 17:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65147 00:08:25.757 17:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:25.757 17:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.757 17:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65147 00:08:25.757 killing process with pid 65147 00:08:25.757 17:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.757 17:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.757 17:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65147' 00:08:25.757 17:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65147 00:08:25.757 [2024-11-20 17:00:49.423833] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.757 17:00:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65147 00:08:26.016 [2024-11-20 17:00:49.634127] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:26.954 17:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KTUaBVjJ79 00:08:26.954 17:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:26.954 17:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:26.954 17:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:26.954 17:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:26.954 17:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:26.954 17:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:26.954 17:00:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:26.954 00:08:26.954 real 0m4.936s 00:08:26.954 user 0m6.195s 00:08:26.954 sys 0m0.600s 00:08:26.954 17:00:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.954 ************************************ 00:08:26.954 END TEST raid_read_error_test 00:08:26.954 ************************************ 00:08:26.954 17:00:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.213 17:00:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:27.213 17:00:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:27.213 17:00:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.213 17:00:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:27.213 ************************************ 00:08:27.213 START TEST raid_write_error_test 00:08:27.213 ************************************ 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:27.213 17:00:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:27.213 17:00:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:27.213 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:27.214 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:27.214 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:27.214 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:27.214 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:27.214 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XRUC42E4zP 00:08:27.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.214 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65293 00:08:27.214 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65293 00:08:27.214 17:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65293 ']' 00:08:27.214 17:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.214 17:00:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:27.214 17:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.214 17:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:27.214 17:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.214 17:00:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.214 [2024-11-20 17:00:50.989476] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:08:27.214 [2024-11-20 17:00:50.989882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65293 ] 00:08:27.481 [2024-11-20 17:00:51.180510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.764 [2024-11-20 17:00:51.344024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.764 [2024-11-20 17:00:51.562173] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.764 [2024-11-20 17:00:51.562221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.334 17:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.334 17:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:28.334 17:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:28.334 17:00:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:28.334 17:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.334 17:00:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.334 BaseBdev1_malloc 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.334 true 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.334 [2024-11-20 17:00:52.053282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:28.334 [2024-11-20 17:00:52.053476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.334 [2024-11-20 17:00:52.053549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:28.334 [2024-11-20 17:00:52.053815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.334 [2024-11-20 17:00:52.056619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.334 [2024-11-20 17:00:52.056840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:28.334 BaseBdev1 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:28.334 BaseBdev2_malloc 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.334 true 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.334 [2024-11-20 17:00:52.121501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:28.334 [2024-11-20 17:00:52.121578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.334 [2024-11-20 17:00:52.121603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:28.334 [2024-11-20 17:00:52.121619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.334 [2024-11-20 17:00:52.124579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.334 [2024-11-20 17:00:52.124639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:28.334 BaseBdev2 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:28.334 17:00:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.334 BaseBdev3_malloc 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.334 true 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.334 [2024-11-20 17:00:52.184369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:28.334 [2024-11-20 17:00:52.184443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.334 [2024-11-20 17:00:52.184467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:28.334 [2024-11-20 17:00:52.184484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.334 [2024-11-20 17:00:52.187278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.334 [2024-11-20 17:00:52.187346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:28.334 BaseBdev3 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.334 [2024-11-20 17:00:52.192454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.334 [2024-11-20 17:00:52.194963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:28.334 [2024-11-20 17:00:52.195227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:28.334 [2024-11-20 17:00:52.195575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:28.334 [2024-11-20 17:00:52.195637] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:28.334 [2024-11-20 17:00:52.196180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:28.334 [2024-11-20 17:00:52.196459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:28.334 [2024-11-20 17:00:52.196517] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:28.334 [2024-11-20 17:00:52.196977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:28.334 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:28.593 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.593 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.593 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.593 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.593 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.593 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.593 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.593 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.593 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.593 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.593 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.593 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.593 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.593 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.593 "name": "raid_bdev1", 00:08:28.593 "uuid": "9ca446bb-466f-4034-a546-594ff4717750", 00:08:28.593 "strip_size_kb": 64, 00:08:28.593 "state": "online", 00:08:28.593 "raid_level": "raid0", 00:08:28.593 "superblock": true, 00:08:28.593 "num_base_bdevs": 3, 00:08:28.593 "num_base_bdevs_discovered": 3, 00:08:28.593 "num_base_bdevs_operational": 3, 00:08:28.593 "base_bdevs_list": [ 00:08:28.593 { 00:08:28.593 "name": "BaseBdev1", 
00:08:28.593 "uuid": "d01c6bc8-1e1b-5ca9-8c69-fd7654c3c3ed", 00:08:28.593 "is_configured": true, 00:08:28.593 "data_offset": 2048, 00:08:28.593 "data_size": 63488 00:08:28.593 }, 00:08:28.593 { 00:08:28.593 "name": "BaseBdev2", 00:08:28.593 "uuid": "7e172171-298b-5bbd-ab35-722f8d489f7e", 00:08:28.593 "is_configured": true, 00:08:28.593 "data_offset": 2048, 00:08:28.593 "data_size": 63488 00:08:28.593 }, 00:08:28.593 { 00:08:28.593 "name": "BaseBdev3", 00:08:28.593 "uuid": "8528ddca-e880-57b5-a0d4-758470f3c2de", 00:08:28.593 "is_configured": true, 00:08:28.593 "data_offset": 2048, 00:08:28.593 "data_size": 63488 00:08:28.593 } 00:08:28.593 ] 00:08:28.593 }' 00:08:28.593 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.593 17:00:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.853 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:28.853 17:00:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:29.112 [2024-11-20 17:00:52.810274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.051 "name": "raid_bdev1", 00:08:30.051 "uuid": "9ca446bb-466f-4034-a546-594ff4717750", 00:08:30.051 "strip_size_kb": 64, 00:08:30.051 "state": "online", 00:08:30.051 
"raid_level": "raid0", 00:08:30.051 "superblock": true, 00:08:30.051 "num_base_bdevs": 3, 00:08:30.051 "num_base_bdevs_discovered": 3, 00:08:30.051 "num_base_bdevs_operational": 3, 00:08:30.051 "base_bdevs_list": [ 00:08:30.051 { 00:08:30.051 "name": "BaseBdev1", 00:08:30.051 "uuid": "d01c6bc8-1e1b-5ca9-8c69-fd7654c3c3ed", 00:08:30.051 "is_configured": true, 00:08:30.051 "data_offset": 2048, 00:08:30.051 "data_size": 63488 00:08:30.051 }, 00:08:30.051 { 00:08:30.051 "name": "BaseBdev2", 00:08:30.051 "uuid": "7e172171-298b-5bbd-ab35-722f8d489f7e", 00:08:30.051 "is_configured": true, 00:08:30.051 "data_offset": 2048, 00:08:30.051 "data_size": 63488 00:08:30.051 }, 00:08:30.051 { 00:08:30.051 "name": "BaseBdev3", 00:08:30.051 "uuid": "8528ddca-e880-57b5-a0d4-758470f3c2de", 00:08:30.051 "is_configured": true, 00:08:30.051 "data_offset": 2048, 00:08:30.051 "data_size": 63488 00:08:30.051 } 00:08:30.051 ] 00:08:30.051 }' 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.051 17:00:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.620 17:00:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:30.620 17:00:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.620 17:00:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.620 [2024-11-20 17:00:54.230212] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:30.620 [2024-11-20 17:00:54.230439] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.620 [2024-11-20 17:00:54.235035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.620 [2024-11-20 17:00:54.235462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.620 { 00:08:30.620 "results": [ 00:08:30.620 { 00:08:30.620 "job": 
"raid_bdev1", 00:08:30.620 "core_mask": "0x1", 00:08:30.620 "workload": "randrw", 00:08:30.620 "percentage": 50, 00:08:30.620 "status": "finished", 00:08:30.620 "queue_depth": 1, 00:08:30.620 "io_size": 131072, 00:08:30.620 "runtime": 1.41819, 00:08:30.620 "iops": 11148.717731756677, 00:08:30.620 "mibps": 1393.5897164695846, 00:08:30.620 "io_failed": 1, 00:08:30.620 "io_timeout": 0, 00:08:30.620 "avg_latency_us": 125.13524457834096, 00:08:30.620 "min_latency_us": 27.46181818181818, 00:08:30.620 "max_latency_us": 1675.6363636363637 00:08:30.620 } 00:08:30.620 ], 00:08:30.620 "core_count": 1 00:08:30.620 } 00:08:30.620 [2024-11-20 17:00:54.235763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.620 [2024-11-20 17:00:54.235838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:30.620 17:00:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.620 17:00:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65293 00:08:30.620 17:00:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65293 ']' 00:08:30.620 17:00:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65293 00:08:30.620 17:00:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:30.620 17:00:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.620 17:00:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65293 00:08:30.620 killing process with pid 65293 00:08:30.620 17:00:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.620 17:00:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.620 17:00:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65293' 00:08:30.620 17:00:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65293 00:08:30.620 [2024-11-20 17:00:54.275210] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.620 17:00:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65293 00:08:30.880 [2024-11-20 17:00:54.508867] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.818 17:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XRUC42E4zP 00:08:31.818 17:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:31.818 17:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:31.818 17:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:31.818 17:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:31.818 17:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:31.818 17:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:31.818 17:00:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:31.818 00:08:31.818 real 0m4.647s 00:08:31.818 user 0m5.737s 00:08:31.818 sys 0m0.599s 00:08:31.818 ************************************ 00:08:31.818 END TEST raid_write_error_test 00:08:31.818 ************************************ 00:08:31.818 17:00:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.818 17:00:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.818 17:00:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:31.818 17:00:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:31.818 17:00:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:31.818 17:00:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.818 17:00:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.818 ************************************ 00:08:31.818 START TEST raid_state_function_test 00:08:31.818 ************************************ 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:31.818 17:00:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:31.818 Process raid pid: 65436 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65436 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65436' 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65436 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65436 ']' 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.818 17:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.078 [2024-11-20 17:00:55.687884] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:08:32.078 [2024-11-20 17:00:55.688315] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.078 [2024-11-20 17:00:55.869677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.337 [2024-11-20 17:00:55.984756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.337 [2024-11-20 17:00:56.178769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.337 [2024-11-20 17:00:56.178819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.905 17:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.905 17:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:32.905 17:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:32.905 17:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.905 17:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.905 [2024-11-20 17:00:56.647315] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.905 [2024-11-20 17:00:56.647400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.905 [2024-11-20 17:00:56.647419] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.905 [2024-11-20 17:00:56.647435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.905 [2024-11-20 17:00:56.647445] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:32.905 [2024-11-20 17:00:56.647460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:32.905 17:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.905 17:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:32.905 17:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.905 17:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.905 17:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.905 17:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.905 17:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.905 17:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:08:32.905 17:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.905 17:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.905 17:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.905 17:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.906 17:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.906 17:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.906 17:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.906 17:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.906 17:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.906 "name": "Existed_Raid", 00:08:32.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.906 "strip_size_kb": 64, 00:08:32.906 "state": "configuring", 00:08:32.906 "raid_level": "concat", 00:08:32.906 "superblock": false, 00:08:32.906 "num_base_bdevs": 3, 00:08:32.906 "num_base_bdevs_discovered": 0, 00:08:32.906 "num_base_bdevs_operational": 3, 00:08:32.906 "base_bdevs_list": [ 00:08:32.906 { 00:08:32.906 "name": "BaseBdev1", 00:08:32.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.906 "is_configured": false, 00:08:32.906 "data_offset": 0, 00:08:32.906 "data_size": 0 00:08:32.906 }, 00:08:32.906 { 00:08:32.906 "name": "BaseBdev2", 00:08:32.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.906 "is_configured": false, 00:08:32.906 "data_offset": 0, 00:08:32.906 "data_size": 0 00:08:32.906 }, 00:08:32.906 { 00:08:32.906 "name": "BaseBdev3", 00:08:32.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.906 
"is_configured": false, 00:08:32.906 "data_offset": 0, 00:08:32.906 "data_size": 0 00:08:32.906 } 00:08:32.906 ] 00:08:32.906 }' 00:08:32.906 17:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.906 17:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.474 [2024-11-20 17:00:57.171467] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.474 [2024-11-20 17:00:57.171519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.474 [2024-11-20 17:00:57.183453] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.474 [2024-11-20 17:00:57.183637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.474 [2024-11-20 17:00:57.183823] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.474 [2024-11-20 17:00:57.183974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.474 [2024-11-20 
17:00:57.184110] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:33.474 [2024-11-20 17:00:57.184233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.474 [2024-11-20 17:00:57.228747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.474 BaseBdev1 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.474 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.475 17:00:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.475 [ 00:08:33.475 { 00:08:33.475 "name": "BaseBdev1", 00:08:33.475 "aliases": [ 00:08:33.475 "3d81c039-f014-4cca-9d7f-6c8128aaa465" 00:08:33.475 ], 00:08:33.475 "product_name": "Malloc disk", 00:08:33.475 "block_size": 512, 00:08:33.475 "num_blocks": 65536, 00:08:33.475 "uuid": "3d81c039-f014-4cca-9d7f-6c8128aaa465", 00:08:33.475 "assigned_rate_limits": { 00:08:33.475 "rw_ios_per_sec": 0, 00:08:33.475 "rw_mbytes_per_sec": 0, 00:08:33.475 "r_mbytes_per_sec": 0, 00:08:33.475 "w_mbytes_per_sec": 0 00:08:33.475 }, 00:08:33.475 "claimed": true, 00:08:33.475 "claim_type": "exclusive_write", 00:08:33.475 "zoned": false, 00:08:33.475 "supported_io_types": { 00:08:33.475 "read": true, 00:08:33.475 "write": true, 00:08:33.475 "unmap": true, 00:08:33.475 "flush": true, 00:08:33.475 "reset": true, 00:08:33.475 "nvme_admin": false, 00:08:33.475 "nvme_io": false, 00:08:33.475 "nvme_io_md": false, 00:08:33.475 "write_zeroes": true, 00:08:33.475 "zcopy": true, 00:08:33.475 "get_zone_info": false, 00:08:33.475 "zone_management": false, 00:08:33.475 "zone_append": false, 00:08:33.475 "compare": false, 00:08:33.475 "compare_and_write": false, 00:08:33.475 "abort": true, 00:08:33.475 "seek_hole": false, 00:08:33.475 "seek_data": false, 00:08:33.475 "copy": true, 00:08:33.475 "nvme_iov_md": false 00:08:33.475 }, 00:08:33.475 "memory_domains": [ 00:08:33.475 { 00:08:33.475 "dma_device_id": "system", 00:08:33.475 "dma_device_type": 1 00:08:33.475 }, 00:08:33.475 { 00:08:33.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.475 "dma_device_type": 
2 00:08:33.475 } 00:08:33.475 ], 00:08:33.475 "driver_specific": {} 00:08:33.475 } 00:08:33.475 ] 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.475 "name": "Existed_Raid", 00:08:33.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.475 "strip_size_kb": 64, 00:08:33.475 "state": "configuring", 00:08:33.475 "raid_level": "concat", 00:08:33.475 "superblock": false, 00:08:33.475 "num_base_bdevs": 3, 00:08:33.475 "num_base_bdevs_discovered": 1, 00:08:33.475 "num_base_bdevs_operational": 3, 00:08:33.475 "base_bdevs_list": [ 00:08:33.475 { 00:08:33.475 "name": "BaseBdev1", 00:08:33.475 "uuid": "3d81c039-f014-4cca-9d7f-6c8128aaa465", 00:08:33.475 "is_configured": true, 00:08:33.475 "data_offset": 0, 00:08:33.475 "data_size": 65536 00:08:33.475 }, 00:08:33.475 { 00:08:33.475 "name": "BaseBdev2", 00:08:33.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.475 "is_configured": false, 00:08:33.475 "data_offset": 0, 00:08:33.475 "data_size": 0 00:08:33.475 }, 00:08:33.475 { 00:08:33.475 "name": "BaseBdev3", 00:08:33.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.475 "is_configured": false, 00:08:33.475 "data_offset": 0, 00:08:33.475 "data_size": 0 00:08:33.475 } 00:08:33.475 ] 00:08:33.475 }' 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.475 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.043 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:34.043 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.043 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.043 [2024-11-20 17:00:57.797049] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:34.043 [2024-11-20 17:00:57.797126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:34.043 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.043 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:34.043 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.044 [2024-11-20 17:00:57.805069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.044 [2024-11-20 17:00:57.807899] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.044 [2024-11-20 17:00:57.807974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.044 [2024-11-20 17:00:57.807990] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:34.044 [2024-11-20 17:00:57.808005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.044 "name": "Existed_Raid", 00:08:34.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.044 "strip_size_kb": 64, 00:08:34.044 "state": "configuring", 00:08:34.044 "raid_level": "concat", 00:08:34.044 "superblock": false, 00:08:34.044 "num_base_bdevs": 3, 00:08:34.044 "num_base_bdevs_discovered": 1, 00:08:34.044 "num_base_bdevs_operational": 3, 00:08:34.044 "base_bdevs_list": [ 00:08:34.044 { 00:08:34.044 "name": "BaseBdev1", 00:08:34.044 "uuid": "3d81c039-f014-4cca-9d7f-6c8128aaa465", 00:08:34.044 "is_configured": true, 00:08:34.044 "data_offset": 0, 00:08:34.044 "data_size": 65536 
00:08:34.044 }, 00:08:34.044 { 00:08:34.044 "name": "BaseBdev2", 00:08:34.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.044 "is_configured": false, 00:08:34.044 "data_offset": 0, 00:08:34.044 "data_size": 0 00:08:34.044 }, 00:08:34.044 { 00:08:34.044 "name": "BaseBdev3", 00:08:34.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.044 "is_configured": false, 00:08:34.044 "data_offset": 0, 00:08:34.044 "data_size": 0 00:08:34.044 } 00:08:34.044 ] 00:08:34.044 }' 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.044 17:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.612 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:34.612 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.612 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.612 [2024-11-20 17:00:58.372409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.612 BaseBdev2 00:08:34.612 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.612 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:34.612 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:34.612 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:34.612 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:34.612 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:34.612 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:34.612 17:00:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:34.612 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.612 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.612 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.612 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:34.612 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.613 [ 00:08:34.613 { 00:08:34.613 "name": "BaseBdev2", 00:08:34.613 "aliases": [ 00:08:34.613 "b2e15368-258e-4087-91af-fc4a129daf16" 00:08:34.613 ], 00:08:34.613 "product_name": "Malloc disk", 00:08:34.613 "block_size": 512, 00:08:34.613 "num_blocks": 65536, 00:08:34.613 "uuid": "b2e15368-258e-4087-91af-fc4a129daf16", 00:08:34.613 "assigned_rate_limits": { 00:08:34.613 "rw_ios_per_sec": 0, 00:08:34.613 "rw_mbytes_per_sec": 0, 00:08:34.613 "r_mbytes_per_sec": 0, 00:08:34.613 "w_mbytes_per_sec": 0 00:08:34.613 }, 00:08:34.613 "claimed": true, 00:08:34.613 "claim_type": "exclusive_write", 00:08:34.613 "zoned": false, 00:08:34.613 "supported_io_types": { 00:08:34.613 "read": true, 00:08:34.613 "write": true, 00:08:34.613 "unmap": true, 00:08:34.613 "flush": true, 00:08:34.613 "reset": true, 00:08:34.613 "nvme_admin": false, 00:08:34.613 "nvme_io": false, 00:08:34.613 "nvme_io_md": false, 00:08:34.613 "write_zeroes": true, 00:08:34.613 "zcopy": true, 00:08:34.613 "get_zone_info": false, 00:08:34.613 "zone_management": false, 00:08:34.613 "zone_append": false, 00:08:34.613 "compare": false, 00:08:34.613 "compare_and_write": false, 00:08:34.613 "abort": true, 00:08:34.613 "seek_hole": false, 00:08:34.613 
"seek_data": false, 00:08:34.613 "copy": true, 00:08:34.613 "nvme_iov_md": false 00:08:34.613 }, 00:08:34.613 "memory_domains": [ 00:08:34.613 { 00:08:34.613 "dma_device_id": "system", 00:08:34.613 "dma_device_type": 1 00:08:34.613 }, 00:08:34.613 { 00:08:34.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.613 "dma_device_type": 2 00:08:34.613 } 00:08:34.613 ], 00:08:34.613 "driver_specific": {} 00:08:34.613 } 00:08:34.613 ] 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.613 "name": "Existed_Raid", 00:08:34.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.613 "strip_size_kb": 64, 00:08:34.613 "state": "configuring", 00:08:34.613 "raid_level": "concat", 00:08:34.613 "superblock": false, 00:08:34.613 "num_base_bdevs": 3, 00:08:34.613 "num_base_bdevs_discovered": 2, 00:08:34.613 "num_base_bdevs_operational": 3, 00:08:34.613 "base_bdevs_list": [ 00:08:34.613 { 00:08:34.613 "name": "BaseBdev1", 00:08:34.613 "uuid": "3d81c039-f014-4cca-9d7f-6c8128aaa465", 00:08:34.613 "is_configured": true, 00:08:34.613 "data_offset": 0, 00:08:34.613 "data_size": 65536 00:08:34.613 }, 00:08:34.613 { 00:08:34.613 "name": "BaseBdev2", 00:08:34.613 "uuid": "b2e15368-258e-4087-91af-fc4a129daf16", 00:08:34.613 "is_configured": true, 00:08:34.613 "data_offset": 0, 00:08:34.613 "data_size": 65536 00:08:34.613 }, 00:08:34.613 { 00:08:34.613 "name": "BaseBdev3", 00:08:34.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.613 "is_configured": false, 00:08:34.613 "data_offset": 0, 00:08:34.613 "data_size": 0 00:08:34.613 } 00:08:34.613 ] 00:08:34.613 }' 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.613 17:00:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.182 [2024-11-20 17:00:58.963627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.182 [2024-11-20 17:00:58.963738] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:35.182 [2024-11-20 17:00:58.963756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:35.182 [2024-11-20 17:00:58.964170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:35.182 [2024-11-20 17:00:58.964444] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:35.182 [2024-11-20 17:00:58.964468] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:35.182 [2024-11-20 17:00:58.964793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.182 BaseBdev3 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.182 17:00:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.182 [ 00:08:35.182 { 00:08:35.182 "name": "BaseBdev3", 00:08:35.182 "aliases": [ 00:08:35.182 "8dd2d042-e2c9-453e-8e9a-3f41e572daae" 00:08:35.182 ], 00:08:35.182 "product_name": "Malloc disk", 00:08:35.182 "block_size": 512, 00:08:35.182 "num_blocks": 65536, 00:08:35.182 "uuid": "8dd2d042-e2c9-453e-8e9a-3f41e572daae", 00:08:35.182 "assigned_rate_limits": { 00:08:35.182 "rw_ios_per_sec": 0, 00:08:35.182 "rw_mbytes_per_sec": 0, 00:08:35.182 "r_mbytes_per_sec": 0, 00:08:35.182 "w_mbytes_per_sec": 0 00:08:35.182 }, 00:08:35.182 "claimed": true, 00:08:35.182 "claim_type": "exclusive_write", 00:08:35.182 "zoned": false, 00:08:35.182 "supported_io_types": { 00:08:35.182 "read": true, 00:08:35.182 "write": true, 00:08:35.182 "unmap": true, 00:08:35.182 "flush": true, 00:08:35.182 "reset": true, 00:08:35.182 "nvme_admin": false, 00:08:35.182 "nvme_io": false, 00:08:35.182 "nvme_io_md": false, 00:08:35.182 "write_zeroes": true, 00:08:35.182 "zcopy": true, 00:08:35.182 "get_zone_info": false, 00:08:35.182 "zone_management": false, 00:08:35.182 "zone_append": false, 00:08:35.182 "compare": false, 
00:08:35.182 "compare_and_write": false, 00:08:35.182 "abort": true, 00:08:35.182 "seek_hole": false, 00:08:35.182 "seek_data": false, 00:08:35.182 "copy": true, 00:08:35.182 "nvme_iov_md": false 00:08:35.182 }, 00:08:35.182 "memory_domains": [ 00:08:35.182 { 00:08:35.182 "dma_device_id": "system", 00:08:35.182 "dma_device_type": 1 00:08:35.182 }, 00:08:35.182 { 00:08:35.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.182 "dma_device_type": 2 00:08:35.182 } 00:08:35.182 ], 00:08:35.182 "driver_specific": {} 00:08:35.182 } 00:08:35.182 ] 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.182 17:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.182 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.182 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.182 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.182 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.182 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:35.182 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.182 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.182 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.182 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.182 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.182 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.440 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.440 "name": "Existed_Raid", 00:08:35.440 "uuid": "f3e9b565-9368-49ae-b908-96d1edf5c310", 00:08:35.440 "strip_size_kb": 64, 00:08:35.440 "state": "online", 00:08:35.440 "raid_level": "concat", 00:08:35.440 "superblock": false, 00:08:35.440 "num_base_bdevs": 3, 00:08:35.440 "num_base_bdevs_discovered": 3, 00:08:35.440 "num_base_bdevs_operational": 3, 00:08:35.440 "base_bdevs_list": [ 00:08:35.440 { 00:08:35.440 "name": "BaseBdev1", 00:08:35.440 "uuid": "3d81c039-f014-4cca-9d7f-6c8128aaa465", 00:08:35.440 "is_configured": true, 00:08:35.440 "data_offset": 0, 00:08:35.440 "data_size": 65536 00:08:35.440 }, 00:08:35.440 { 00:08:35.440 "name": "BaseBdev2", 00:08:35.440 "uuid": "b2e15368-258e-4087-91af-fc4a129daf16", 00:08:35.440 "is_configured": true, 00:08:35.440 "data_offset": 0, 00:08:35.440 "data_size": 65536 00:08:35.440 }, 00:08:35.440 { 00:08:35.440 "name": "BaseBdev3", 00:08:35.440 "uuid": "8dd2d042-e2c9-453e-8e9a-3f41e572daae", 00:08:35.440 "is_configured": true, 00:08:35.440 "data_offset": 0, 00:08:35.440 "data_size": 65536 00:08:35.440 } 00:08:35.440 ] 00:08:35.440 }' 00:08:35.440 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:35.440 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.698 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:35.698 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:35.698 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:35.699 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.699 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.699 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.699 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:35.699 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.699 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.699 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.699 [2024-11-20 17:00:59.536378] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.699 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.957 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.958 "name": "Existed_Raid", 00:08:35.958 "aliases": [ 00:08:35.958 "f3e9b565-9368-49ae-b908-96d1edf5c310" 00:08:35.958 ], 00:08:35.958 "product_name": "Raid Volume", 00:08:35.958 "block_size": 512, 00:08:35.958 "num_blocks": 196608, 00:08:35.958 "uuid": "f3e9b565-9368-49ae-b908-96d1edf5c310", 00:08:35.958 "assigned_rate_limits": { 00:08:35.958 "rw_ios_per_sec": 0, 00:08:35.958 "rw_mbytes_per_sec": 0, 00:08:35.958 "r_mbytes_per_sec": 
0, 00:08:35.958 "w_mbytes_per_sec": 0 00:08:35.958 }, 00:08:35.958 "claimed": false, 00:08:35.958 "zoned": false, 00:08:35.958 "supported_io_types": { 00:08:35.958 "read": true, 00:08:35.958 "write": true, 00:08:35.958 "unmap": true, 00:08:35.958 "flush": true, 00:08:35.958 "reset": true, 00:08:35.958 "nvme_admin": false, 00:08:35.958 "nvme_io": false, 00:08:35.958 "nvme_io_md": false, 00:08:35.958 "write_zeroes": true, 00:08:35.958 "zcopy": false, 00:08:35.958 "get_zone_info": false, 00:08:35.958 "zone_management": false, 00:08:35.958 "zone_append": false, 00:08:35.958 "compare": false, 00:08:35.958 "compare_and_write": false, 00:08:35.958 "abort": false, 00:08:35.958 "seek_hole": false, 00:08:35.958 "seek_data": false, 00:08:35.958 "copy": false, 00:08:35.958 "nvme_iov_md": false 00:08:35.958 }, 00:08:35.958 "memory_domains": [ 00:08:35.958 { 00:08:35.958 "dma_device_id": "system", 00:08:35.958 "dma_device_type": 1 00:08:35.958 }, 00:08:35.958 { 00:08:35.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.958 "dma_device_type": 2 00:08:35.958 }, 00:08:35.958 { 00:08:35.958 "dma_device_id": "system", 00:08:35.958 "dma_device_type": 1 00:08:35.958 }, 00:08:35.958 { 00:08:35.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.958 "dma_device_type": 2 00:08:35.958 }, 00:08:35.958 { 00:08:35.958 "dma_device_id": "system", 00:08:35.958 "dma_device_type": 1 00:08:35.958 }, 00:08:35.958 { 00:08:35.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.958 "dma_device_type": 2 00:08:35.958 } 00:08:35.958 ], 00:08:35.958 "driver_specific": { 00:08:35.958 "raid": { 00:08:35.958 "uuid": "f3e9b565-9368-49ae-b908-96d1edf5c310", 00:08:35.958 "strip_size_kb": 64, 00:08:35.958 "state": "online", 00:08:35.958 "raid_level": "concat", 00:08:35.958 "superblock": false, 00:08:35.958 "num_base_bdevs": 3, 00:08:35.958 "num_base_bdevs_discovered": 3, 00:08:35.958 "num_base_bdevs_operational": 3, 00:08:35.958 "base_bdevs_list": [ 00:08:35.958 { 00:08:35.958 "name": "BaseBdev1", 
00:08:35.958 "uuid": "3d81c039-f014-4cca-9d7f-6c8128aaa465", 00:08:35.958 "is_configured": true, 00:08:35.958 "data_offset": 0, 00:08:35.958 "data_size": 65536 00:08:35.958 }, 00:08:35.958 { 00:08:35.958 "name": "BaseBdev2", 00:08:35.958 "uuid": "b2e15368-258e-4087-91af-fc4a129daf16", 00:08:35.958 "is_configured": true, 00:08:35.958 "data_offset": 0, 00:08:35.958 "data_size": 65536 00:08:35.958 }, 00:08:35.958 { 00:08:35.958 "name": "BaseBdev3", 00:08:35.958 "uuid": "8dd2d042-e2c9-453e-8e9a-3f41e572daae", 00:08:35.958 "is_configured": true, 00:08:35.958 "data_offset": 0, 00:08:35.958 "data_size": 65536 00:08:35.958 } 00:08:35.958 ] 00:08:35.958 } 00:08:35.958 } 00:08:35.958 }' 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:35.958 BaseBdev2 00:08:35.958 BaseBdev3' 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.958 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
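The trace above is `verify_raid_bdev_properties`: jq extracts the configured base bdev names from `.driver_specific.raid.base_bdevs_list`, then builds a `"block_size md_size md_interleave dif_type"` string for the raid bdev and each base bdev and compares them (the `[[ 512 == \5\1\2\ \ \ ]]` checks — "512" plus three spaces, since the three metadata fields are absent and jq's `join(" ")` renders null as an empty string). A minimal Python sketch of the same comparison, run against a trimmed sample of the JSON dumped in the log (the sample is abbreviated; field values are taken from the log, not authoritative):

```python
import json

# Trimmed sample of the `bdev_get_bdevs -b Existed_Raid` output seen above.
raid_bdev = json.loads("""
{
  "name": "Existed_Raid",
  "block_size": 512,
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": true}
      ]
    }
  }
}
""")

def props(bdev):
    # Mirrors jq's '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")':
    # absent/null fields become empty strings, so a plain 512-byte bdev with no
    # metadata yields "512" followed by three spaces.
    keys = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(k) is None else str(bdev[k]) for k in keys)

# Mirrors jq's 'select(.is_configured == true).name' over the base bdev list.
configured = [b["name"]
              for b in raid_bdev["driver_specific"]["raid"]["base_bdevs_list"]
              if b["is_configured"]]

print(configured)              # ['BaseBdev1', 'BaseBdev2', 'BaseBdev3']
print(repr(props(raid_bdev)))  # '512   '
```

Each base bdev's `props()` string would then be compared against the raid bdev's, which is exactly what the repeated `cmp_base_bdev='512 '` / `[[ 512 == \5\1\2\ \ \ ]]` lines above are doing for BaseBdev1 through BaseBdev3.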
00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.217 [2024-11-20 17:00:59.912080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:36.217 [2024-11-20 17:00:59.912116] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.217 [2024-11-20 17:00:59.912189] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.217 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:36.218 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.218 17:00:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.218 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.218 17:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.218 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.218 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.218 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.218 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.218 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.218 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.218 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.218 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.218 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.218 "name": "Existed_Raid", 00:08:36.218 "uuid": "f3e9b565-9368-49ae-b908-96d1edf5c310", 00:08:36.218 "strip_size_kb": 64, 00:08:36.218 "state": "offline", 00:08:36.218 "raid_level": "concat", 00:08:36.218 "superblock": false, 00:08:36.218 "num_base_bdevs": 3, 00:08:36.218 "num_base_bdevs_discovered": 2, 00:08:36.218 "num_base_bdevs_operational": 2, 00:08:36.218 "base_bdevs_list": [ 00:08:36.218 { 00:08:36.218 "name": null, 00:08:36.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.218 "is_configured": false, 00:08:36.218 "data_offset": 0, 00:08:36.218 "data_size": 65536 00:08:36.218 }, 00:08:36.218 { 00:08:36.218 "name": "BaseBdev2", 00:08:36.218 "uuid": 
"b2e15368-258e-4087-91af-fc4a129daf16", 00:08:36.218 "is_configured": true, 00:08:36.218 "data_offset": 0, 00:08:36.218 "data_size": 65536 00:08:36.218 }, 00:08:36.218 { 00:08:36.218 "name": "BaseBdev3", 00:08:36.218 "uuid": "8dd2d042-e2c9-453e-8e9a-3f41e572daae", 00:08:36.218 "is_configured": true, 00:08:36.218 "data_offset": 0, 00:08:36.218 "data_size": 65536 00:08:36.218 } 00:08:36.218 ] 00:08:36.218 }' 00:08:36.218 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.218 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.786 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:36.786 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.786 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.786 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.786 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.786 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:36.786 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.786 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:36.786 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:36.786 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:36.786 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.786 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.786 [2024-11-20 17:01:00.586531] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:37.045 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.045 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:37.045 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:37.045 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.045 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.045 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:37.045 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.045 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.045 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:37.045 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:37.045 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:37.045 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.045 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.045 [2024-11-20 17:01:00.754396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:37.045 [2024-11-20 17:01:00.754460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:37.046 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.046 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:37.046 17:01:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:37.046 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.046 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.046 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:37.046 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.046 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.046 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:37.046 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:37.046 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:37.046 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:37.046 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:37.046 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:37.046 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.046 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.305 BaseBdev2 00:08:37.305 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.305 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:37.305 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:37.305 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:37.305 
17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:37.305 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:37.305 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:37.305 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:37.305 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.305 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.305 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.305 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:37.305 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.305 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.305 [ 00:08:37.305 { 00:08:37.305 "name": "BaseBdev2", 00:08:37.305 "aliases": [ 00:08:37.305 "8719e9a6-169b-44b2-abf2-3b78bc9ece12" 00:08:37.305 ], 00:08:37.305 "product_name": "Malloc disk", 00:08:37.305 "block_size": 512, 00:08:37.305 "num_blocks": 65536, 00:08:37.305 "uuid": "8719e9a6-169b-44b2-abf2-3b78bc9ece12", 00:08:37.305 "assigned_rate_limits": { 00:08:37.305 "rw_ios_per_sec": 0, 00:08:37.305 "rw_mbytes_per_sec": 0, 00:08:37.305 "r_mbytes_per_sec": 0, 00:08:37.305 "w_mbytes_per_sec": 0 00:08:37.305 }, 00:08:37.305 "claimed": false, 00:08:37.305 "zoned": false, 00:08:37.305 "supported_io_types": { 00:08:37.305 "read": true, 00:08:37.305 "write": true, 00:08:37.305 "unmap": true, 00:08:37.305 "flush": true, 00:08:37.305 "reset": true, 00:08:37.305 "nvme_admin": false, 00:08:37.305 "nvme_io": false, 00:08:37.305 "nvme_io_md": false, 00:08:37.305 "write_zeroes": true, 
00:08:37.305 "zcopy": true, 00:08:37.305 "get_zone_info": false, 00:08:37.305 "zone_management": false, 00:08:37.305 "zone_append": false, 00:08:37.305 "compare": false, 00:08:37.305 "compare_and_write": false, 00:08:37.305 "abort": true, 00:08:37.305 "seek_hole": false, 00:08:37.305 "seek_data": false, 00:08:37.305 "copy": true, 00:08:37.305 "nvme_iov_md": false 00:08:37.305 }, 00:08:37.305 "memory_domains": [ 00:08:37.305 { 00:08:37.305 "dma_device_id": "system", 00:08:37.305 "dma_device_type": 1 00:08:37.305 }, 00:08:37.305 { 00:08:37.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.306 "dma_device_type": 2 00:08:37.306 } 00:08:37.306 ], 00:08:37.306 "driver_specific": {} 00:08:37.306 } 00:08:37.306 ] 00:08:37.306 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.306 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:37.306 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:37.306 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:37.306 17:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:37.306 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.306 17:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.306 BaseBdev3 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:37.306 17:01:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.306 [ 00:08:37.306 { 00:08:37.306 "name": "BaseBdev3", 00:08:37.306 "aliases": [ 00:08:37.306 "599c5943-afbd-4d6e-bbed-3f7282283aa3" 00:08:37.306 ], 00:08:37.306 "product_name": "Malloc disk", 00:08:37.306 "block_size": 512, 00:08:37.306 "num_blocks": 65536, 00:08:37.306 "uuid": "599c5943-afbd-4d6e-bbed-3f7282283aa3", 00:08:37.306 "assigned_rate_limits": { 00:08:37.306 "rw_ios_per_sec": 0, 00:08:37.306 "rw_mbytes_per_sec": 0, 00:08:37.306 "r_mbytes_per_sec": 0, 00:08:37.306 "w_mbytes_per_sec": 0 00:08:37.306 }, 00:08:37.306 "claimed": false, 00:08:37.306 "zoned": false, 00:08:37.306 "supported_io_types": { 00:08:37.306 "read": true, 00:08:37.306 "write": true, 00:08:37.306 "unmap": true, 00:08:37.306 "flush": true, 00:08:37.306 "reset": true, 00:08:37.306 "nvme_admin": false, 00:08:37.306 "nvme_io": false, 00:08:37.306 "nvme_io_md": false, 00:08:37.306 "write_zeroes": true, 
00:08:37.306 "zcopy": true, 00:08:37.306 "get_zone_info": false, 00:08:37.306 "zone_management": false, 00:08:37.306 "zone_append": false, 00:08:37.306 "compare": false, 00:08:37.306 "compare_and_write": false, 00:08:37.306 "abort": true, 00:08:37.306 "seek_hole": false, 00:08:37.306 "seek_data": false, 00:08:37.306 "copy": true, 00:08:37.306 "nvme_iov_md": false 00:08:37.306 }, 00:08:37.306 "memory_domains": [ 00:08:37.306 { 00:08:37.306 "dma_device_id": "system", 00:08:37.306 "dma_device_type": 1 00:08:37.306 }, 00:08:37.306 { 00:08:37.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.306 "dma_device_type": 2 00:08:37.306 } 00:08:37.306 ], 00:08:37.306 "driver_specific": {} 00:08:37.306 } 00:08:37.306 ] 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.306 [2024-11-20 17:01:01.043942] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.306 [2024-11-20 17:01:01.043995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.306 [2024-11-20 17:01:01.044026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.306 [2024-11-20 17:01:01.046326] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.306 "name": "Existed_Raid", 00:08:37.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.306 "strip_size_kb": 64, 00:08:37.306 "state": "configuring", 00:08:37.306 "raid_level": "concat", 00:08:37.306 "superblock": false, 00:08:37.306 "num_base_bdevs": 3, 00:08:37.306 "num_base_bdevs_discovered": 2, 00:08:37.306 "num_base_bdevs_operational": 3, 00:08:37.306 "base_bdevs_list": [ 00:08:37.306 { 00:08:37.306 "name": "BaseBdev1", 00:08:37.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.306 "is_configured": false, 00:08:37.306 "data_offset": 0, 00:08:37.306 "data_size": 0 00:08:37.306 }, 00:08:37.306 { 00:08:37.306 "name": "BaseBdev2", 00:08:37.306 "uuid": "8719e9a6-169b-44b2-abf2-3b78bc9ece12", 00:08:37.306 "is_configured": true, 00:08:37.306 "data_offset": 0, 00:08:37.306 "data_size": 65536 00:08:37.306 }, 00:08:37.306 { 00:08:37.306 "name": "BaseBdev3", 00:08:37.306 "uuid": "599c5943-afbd-4d6e-bbed-3f7282283aa3", 00:08:37.306 "is_configured": true, 00:08:37.306 "data_offset": 0, 00:08:37.306 "data_size": 65536 00:08:37.306 } 00:08:37.306 ] 00:08:37.306 }' 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.306 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.874 [2024-11-20 17:01:01.556154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.874 "name": "Existed_Raid", 00:08:37.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.874 "strip_size_kb": 64, 00:08:37.874 "state": "configuring", 00:08:37.874 "raid_level": "concat", 00:08:37.874 "superblock": false, 
00:08:37.874 "num_base_bdevs": 3, 00:08:37.874 "num_base_bdevs_discovered": 1, 00:08:37.874 "num_base_bdevs_operational": 3, 00:08:37.874 "base_bdevs_list": [ 00:08:37.874 { 00:08:37.874 "name": "BaseBdev1", 00:08:37.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.874 "is_configured": false, 00:08:37.874 "data_offset": 0, 00:08:37.874 "data_size": 0 00:08:37.874 }, 00:08:37.874 { 00:08:37.874 "name": null, 00:08:37.874 "uuid": "8719e9a6-169b-44b2-abf2-3b78bc9ece12", 00:08:37.874 "is_configured": false, 00:08:37.874 "data_offset": 0, 00:08:37.874 "data_size": 65536 00:08:37.874 }, 00:08:37.874 { 00:08:37.874 "name": "BaseBdev3", 00:08:37.874 "uuid": "599c5943-afbd-4d6e-bbed-3f7282283aa3", 00:08:37.874 "is_configured": true, 00:08:37.874 "data_offset": 0, 00:08:37.874 "data_size": 65536 00:08:37.874 } 00:08:37.874 ] 00:08:37.874 }' 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.874 17:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.443 
17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.443 [2024-11-20 17:01:02.149124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.443 BaseBdev1 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.443 [ 00:08:38.443 { 00:08:38.443 "name": "BaseBdev1", 00:08:38.443 "aliases": [ 00:08:38.443 "284a0921-5c67-4900-9f1b-37d1d13cbc1f" 00:08:38.443 ], 00:08:38.443 "product_name": 
"Malloc disk", 00:08:38.443 "block_size": 512, 00:08:38.443 "num_blocks": 65536, 00:08:38.443 "uuid": "284a0921-5c67-4900-9f1b-37d1d13cbc1f", 00:08:38.443 "assigned_rate_limits": { 00:08:38.443 "rw_ios_per_sec": 0, 00:08:38.443 "rw_mbytes_per_sec": 0, 00:08:38.443 "r_mbytes_per_sec": 0, 00:08:38.443 "w_mbytes_per_sec": 0 00:08:38.443 }, 00:08:38.443 "claimed": true, 00:08:38.443 "claim_type": "exclusive_write", 00:08:38.443 "zoned": false, 00:08:38.443 "supported_io_types": { 00:08:38.443 "read": true, 00:08:38.443 "write": true, 00:08:38.443 "unmap": true, 00:08:38.443 "flush": true, 00:08:38.443 "reset": true, 00:08:38.443 "nvme_admin": false, 00:08:38.443 "nvme_io": false, 00:08:38.443 "nvme_io_md": false, 00:08:38.443 "write_zeroes": true, 00:08:38.443 "zcopy": true, 00:08:38.443 "get_zone_info": false, 00:08:38.443 "zone_management": false, 00:08:38.443 "zone_append": false, 00:08:38.443 "compare": false, 00:08:38.443 "compare_and_write": false, 00:08:38.443 "abort": true, 00:08:38.443 "seek_hole": false, 00:08:38.443 "seek_data": false, 00:08:38.443 "copy": true, 00:08:38.443 "nvme_iov_md": false 00:08:38.443 }, 00:08:38.443 "memory_domains": [ 00:08:38.443 { 00:08:38.443 "dma_device_id": "system", 00:08:38.443 "dma_device_type": 1 00:08:38.443 }, 00:08:38.443 { 00:08:38.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.443 "dma_device_type": 2 00:08:38.443 } 00:08:38.443 ], 00:08:38.443 "driver_specific": {} 00:08:38.443 } 00:08:38.443 ] 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.443 17:01:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.443 "name": "Existed_Raid", 00:08:38.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.443 "strip_size_kb": 64, 00:08:38.443 "state": "configuring", 00:08:38.443 "raid_level": "concat", 00:08:38.443 "superblock": false, 00:08:38.443 "num_base_bdevs": 3, 00:08:38.443 "num_base_bdevs_discovered": 2, 00:08:38.443 "num_base_bdevs_operational": 3, 00:08:38.443 "base_bdevs_list": [ 00:08:38.443 { 00:08:38.443 "name": "BaseBdev1", 
00:08:38.443 "uuid": "284a0921-5c67-4900-9f1b-37d1d13cbc1f", 00:08:38.443 "is_configured": true, 00:08:38.443 "data_offset": 0, 00:08:38.443 "data_size": 65536 00:08:38.443 }, 00:08:38.443 { 00:08:38.443 "name": null, 00:08:38.443 "uuid": "8719e9a6-169b-44b2-abf2-3b78bc9ece12", 00:08:38.443 "is_configured": false, 00:08:38.443 "data_offset": 0, 00:08:38.443 "data_size": 65536 00:08:38.443 }, 00:08:38.443 { 00:08:38.443 "name": "BaseBdev3", 00:08:38.443 "uuid": "599c5943-afbd-4d6e-bbed-3f7282283aa3", 00:08:38.443 "is_configured": true, 00:08:38.443 "data_offset": 0, 00:08:38.443 "data_size": 65536 00:08:38.443 } 00:08:38.443 ] 00:08:38.443 }' 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.443 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.011 [2024-11-20 17:01:02.741425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:39.011 
17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.011 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.012 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.012 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.012 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.012 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.012 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.012 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.012 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.012 "name": "Existed_Raid", 00:08:39.012 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:39.012 "strip_size_kb": 64, 00:08:39.012 "state": "configuring", 00:08:39.012 "raid_level": "concat", 00:08:39.012 "superblock": false, 00:08:39.012 "num_base_bdevs": 3, 00:08:39.012 "num_base_bdevs_discovered": 1, 00:08:39.012 "num_base_bdevs_operational": 3, 00:08:39.012 "base_bdevs_list": [ 00:08:39.012 { 00:08:39.012 "name": "BaseBdev1", 00:08:39.012 "uuid": "284a0921-5c67-4900-9f1b-37d1d13cbc1f", 00:08:39.012 "is_configured": true, 00:08:39.012 "data_offset": 0, 00:08:39.012 "data_size": 65536 00:08:39.012 }, 00:08:39.012 { 00:08:39.012 "name": null, 00:08:39.012 "uuid": "8719e9a6-169b-44b2-abf2-3b78bc9ece12", 00:08:39.012 "is_configured": false, 00:08:39.012 "data_offset": 0, 00:08:39.012 "data_size": 65536 00:08:39.012 }, 00:08:39.012 { 00:08:39.012 "name": null, 00:08:39.012 "uuid": "599c5943-afbd-4d6e-bbed-3f7282283aa3", 00:08:39.012 "is_configured": false, 00:08:39.012 "data_offset": 0, 00:08:39.012 "data_size": 65536 00:08:39.012 } 00:08:39.012 ] 00:08:39.012 }' 00:08:39.012 17:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.012 17:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.580 [2024-11-20 17:01:03.289609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.580 "name": "Existed_Raid", 00:08:39.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.580 "strip_size_kb": 64, 00:08:39.580 "state": "configuring", 00:08:39.580 "raid_level": "concat", 00:08:39.580 "superblock": false, 00:08:39.580 "num_base_bdevs": 3, 00:08:39.580 "num_base_bdevs_discovered": 2, 00:08:39.580 "num_base_bdevs_operational": 3, 00:08:39.580 "base_bdevs_list": [ 00:08:39.580 { 00:08:39.580 "name": "BaseBdev1", 00:08:39.580 "uuid": "284a0921-5c67-4900-9f1b-37d1d13cbc1f", 00:08:39.580 "is_configured": true, 00:08:39.580 "data_offset": 0, 00:08:39.580 "data_size": 65536 00:08:39.580 }, 00:08:39.580 { 00:08:39.580 "name": null, 00:08:39.580 "uuid": "8719e9a6-169b-44b2-abf2-3b78bc9ece12", 00:08:39.580 "is_configured": false, 00:08:39.580 "data_offset": 0, 00:08:39.580 "data_size": 65536 00:08:39.580 }, 00:08:39.580 { 00:08:39.580 "name": "BaseBdev3", 00:08:39.580 "uuid": "599c5943-afbd-4d6e-bbed-3f7282283aa3", 00:08:39.580 "is_configured": true, 00:08:39.580 "data_offset": 0, 00:08:39.580 "data_size": 65536 00:08:39.580 } 00:08:39.580 ] 00:08:39.580 }' 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.580 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.159 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.159 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.159 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:08:40.159 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.159 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.159 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:40.159 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:40.159 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.159 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.159 [2024-11-20 17:01:03.869868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:40.159 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.160 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:40.160 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.160 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.160 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.160 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.160 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.160 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.160 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.160 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.160 17:01:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.160 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.160 17:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.160 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.160 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.160 17:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.436 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.436 "name": "Existed_Raid", 00:08:40.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.436 "strip_size_kb": 64, 00:08:40.436 "state": "configuring", 00:08:40.436 "raid_level": "concat", 00:08:40.436 "superblock": false, 00:08:40.436 "num_base_bdevs": 3, 00:08:40.436 "num_base_bdevs_discovered": 1, 00:08:40.436 "num_base_bdevs_operational": 3, 00:08:40.436 "base_bdevs_list": [ 00:08:40.436 { 00:08:40.436 "name": null, 00:08:40.436 "uuid": "284a0921-5c67-4900-9f1b-37d1d13cbc1f", 00:08:40.436 "is_configured": false, 00:08:40.436 "data_offset": 0, 00:08:40.436 "data_size": 65536 00:08:40.436 }, 00:08:40.436 { 00:08:40.436 "name": null, 00:08:40.436 "uuid": "8719e9a6-169b-44b2-abf2-3b78bc9ece12", 00:08:40.436 "is_configured": false, 00:08:40.436 "data_offset": 0, 00:08:40.436 "data_size": 65536 00:08:40.436 }, 00:08:40.436 { 00:08:40.436 "name": "BaseBdev3", 00:08:40.436 "uuid": "599c5943-afbd-4d6e-bbed-3f7282283aa3", 00:08:40.436 "is_configured": true, 00:08:40.436 "data_offset": 0, 00:08:40.436 "data_size": 65536 00:08:40.436 } 00:08:40.436 ] 00:08:40.436 }' 00:08:40.436 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.436 17:01:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.694 [2024-11-20 17:01:04.528424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.694 17:01:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.694 17:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.952 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.952 "name": "Existed_Raid", 00:08:40.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.952 "strip_size_kb": 64, 00:08:40.952 "state": "configuring", 00:08:40.952 "raid_level": "concat", 00:08:40.952 "superblock": false, 00:08:40.952 "num_base_bdevs": 3, 00:08:40.952 "num_base_bdevs_discovered": 2, 00:08:40.952 "num_base_bdevs_operational": 3, 00:08:40.952 "base_bdevs_list": [ 00:08:40.952 { 00:08:40.952 "name": null, 00:08:40.952 "uuid": "284a0921-5c67-4900-9f1b-37d1d13cbc1f", 00:08:40.952 "is_configured": false, 00:08:40.952 "data_offset": 0, 00:08:40.952 "data_size": 65536 00:08:40.952 }, 00:08:40.952 { 00:08:40.952 "name": "BaseBdev2", 00:08:40.952 "uuid": "8719e9a6-169b-44b2-abf2-3b78bc9ece12", 00:08:40.952 "is_configured": true, 00:08:40.952 "data_offset": 
0, 00:08:40.952 "data_size": 65536 00:08:40.952 }, 00:08:40.952 { 00:08:40.952 "name": "BaseBdev3", 00:08:40.952 "uuid": "599c5943-afbd-4d6e-bbed-3f7282283aa3", 00:08:40.952 "is_configured": true, 00:08:40.952 "data_offset": 0, 00:08:40.952 "data_size": 65536 00:08:40.952 } 00:08:40.952 ] 00:08:40.952 }' 00:08:40.952 17:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.952 17:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.210 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.211 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:41.211 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.211 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.211 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 284a0921-5c67-4900-9f1b-37d1d13cbc1f 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.470 [2024-11-20 17:01:05.185293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:41.470 [2024-11-20 17:01:05.185336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:41.470 [2024-11-20 17:01:05.185349] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:41.470 [2024-11-20 17:01:05.185636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:41.470 [2024-11-20 17:01:05.185845] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:41.470 [2024-11-20 17:01:05.185876] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:41.470 NewBaseBdev 00:08:41.470 [2024-11-20 17:01:05.186239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:41.470 
17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.470 [ 00:08:41.470 { 00:08:41.470 "name": "NewBaseBdev", 00:08:41.470 "aliases": [ 00:08:41.470 "284a0921-5c67-4900-9f1b-37d1d13cbc1f" 00:08:41.470 ], 00:08:41.470 "product_name": "Malloc disk", 00:08:41.470 "block_size": 512, 00:08:41.470 "num_blocks": 65536, 00:08:41.470 "uuid": "284a0921-5c67-4900-9f1b-37d1d13cbc1f", 00:08:41.470 "assigned_rate_limits": { 00:08:41.470 "rw_ios_per_sec": 0, 00:08:41.470 "rw_mbytes_per_sec": 0, 00:08:41.470 "r_mbytes_per_sec": 0, 00:08:41.470 "w_mbytes_per_sec": 0 00:08:41.470 }, 00:08:41.470 "claimed": true, 00:08:41.470 "claim_type": "exclusive_write", 00:08:41.470 "zoned": false, 00:08:41.470 "supported_io_types": { 00:08:41.470 "read": true, 00:08:41.470 "write": true, 00:08:41.470 "unmap": true, 00:08:41.470 "flush": true, 00:08:41.470 "reset": true, 00:08:41.470 "nvme_admin": false, 00:08:41.470 "nvme_io": false, 00:08:41.470 "nvme_io_md": false, 00:08:41.470 "write_zeroes": true, 00:08:41.470 "zcopy": true, 00:08:41.470 "get_zone_info": false, 00:08:41.470 "zone_management": false, 00:08:41.470 "zone_append": false, 00:08:41.470 "compare": false, 00:08:41.470 "compare_and_write": false, 00:08:41.470 "abort": true, 00:08:41.470 "seek_hole": false, 00:08:41.470 "seek_data": false, 00:08:41.470 "copy": true, 00:08:41.470 "nvme_iov_md": false 00:08:41.470 }, 00:08:41.470 
"memory_domains": [ 00:08:41.470 { 00:08:41.470 "dma_device_id": "system", 00:08:41.470 "dma_device_type": 1 00:08:41.470 }, 00:08:41.470 { 00:08:41.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.470 "dma_device_type": 2 00:08:41.470 } 00:08:41.470 ], 00:08:41.470 "driver_specific": {} 00:08:41.470 } 00:08:41.470 ] 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.470 "name": "Existed_Raid", 00:08:41.470 "uuid": "36351098-dea8-4a61-a736-0e79394e7bc7", 00:08:41.470 "strip_size_kb": 64, 00:08:41.470 "state": "online", 00:08:41.470 "raid_level": "concat", 00:08:41.470 "superblock": false, 00:08:41.470 "num_base_bdevs": 3, 00:08:41.470 "num_base_bdevs_discovered": 3, 00:08:41.470 "num_base_bdevs_operational": 3, 00:08:41.470 "base_bdevs_list": [ 00:08:41.470 { 00:08:41.470 "name": "NewBaseBdev", 00:08:41.470 "uuid": "284a0921-5c67-4900-9f1b-37d1d13cbc1f", 00:08:41.470 "is_configured": true, 00:08:41.470 "data_offset": 0, 00:08:41.470 "data_size": 65536 00:08:41.470 }, 00:08:41.470 { 00:08:41.470 "name": "BaseBdev2", 00:08:41.470 "uuid": "8719e9a6-169b-44b2-abf2-3b78bc9ece12", 00:08:41.470 "is_configured": true, 00:08:41.470 "data_offset": 0, 00:08:41.470 "data_size": 65536 00:08:41.470 }, 00:08:41.470 { 00:08:41.470 "name": "BaseBdev3", 00:08:41.470 "uuid": "599c5943-afbd-4d6e-bbed-3f7282283aa3", 00:08:41.470 "is_configured": true, 00:08:41.470 "data_offset": 0, 00:08:41.470 "data_size": 65536 00:08:41.470 } 00:08:41.470 ] 00:08:41.470 }' 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.470 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.039 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:42.039 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:42.039 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:42.039 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:42.039 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.039 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.039 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:42.039 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.039 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.039 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.039 [2024-11-20 17:01:05.765964] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.039 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.039 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.039 "name": "Existed_Raid", 00:08:42.039 "aliases": [ 00:08:42.039 "36351098-dea8-4a61-a736-0e79394e7bc7" 00:08:42.039 ], 00:08:42.039 "product_name": "Raid Volume", 00:08:42.039 "block_size": 512, 00:08:42.039 "num_blocks": 196608, 00:08:42.039 "uuid": "36351098-dea8-4a61-a736-0e79394e7bc7", 00:08:42.039 "assigned_rate_limits": { 00:08:42.039 "rw_ios_per_sec": 0, 00:08:42.039 "rw_mbytes_per_sec": 0, 00:08:42.039 "r_mbytes_per_sec": 0, 00:08:42.039 "w_mbytes_per_sec": 0 00:08:42.039 }, 00:08:42.039 "claimed": false, 00:08:42.039 "zoned": false, 00:08:42.039 "supported_io_types": { 00:08:42.039 "read": true, 00:08:42.039 "write": true, 00:08:42.039 "unmap": true, 00:08:42.039 "flush": true, 00:08:42.039 "reset": true, 00:08:42.039 "nvme_admin": false, 00:08:42.039 "nvme_io": false, 00:08:42.039 "nvme_io_md": false, 00:08:42.039 "write_zeroes": true, 
00:08:42.039 "zcopy": false, 00:08:42.039 "get_zone_info": false, 00:08:42.039 "zone_management": false, 00:08:42.039 "zone_append": false, 00:08:42.039 "compare": false, 00:08:42.039 "compare_and_write": false, 00:08:42.039 "abort": false, 00:08:42.039 "seek_hole": false, 00:08:42.039 "seek_data": false, 00:08:42.039 "copy": false, 00:08:42.039 "nvme_iov_md": false 00:08:42.039 }, 00:08:42.039 "memory_domains": [ 00:08:42.039 { 00:08:42.039 "dma_device_id": "system", 00:08:42.039 "dma_device_type": 1 00:08:42.039 }, 00:08:42.039 { 00:08:42.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.039 "dma_device_type": 2 00:08:42.039 }, 00:08:42.039 { 00:08:42.039 "dma_device_id": "system", 00:08:42.039 "dma_device_type": 1 00:08:42.039 }, 00:08:42.039 { 00:08:42.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.039 "dma_device_type": 2 00:08:42.039 }, 00:08:42.039 { 00:08:42.039 "dma_device_id": "system", 00:08:42.039 "dma_device_type": 1 00:08:42.039 }, 00:08:42.039 { 00:08:42.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.039 "dma_device_type": 2 00:08:42.039 } 00:08:42.039 ], 00:08:42.039 "driver_specific": { 00:08:42.039 "raid": { 00:08:42.039 "uuid": "36351098-dea8-4a61-a736-0e79394e7bc7", 00:08:42.039 "strip_size_kb": 64, 00:08:42.039 "state": "online", 00:08:42.039 "raid_level": "concat", 00:08:42.039 "superblock": false, 00:08:42.039 "num_base_bdevs": 3, 00:08:42.039 "num_base_bdevs_discovered": 3, 00:08:42.039 "num_base_bdevs_operational": 3, 00:08:42.039 "base_bdevs_list": [ 00:08:42.039 { 00:08:42.039 "name": "NewBaseBdev", 00:08:42.039 "uuid": "284a0921-5c67-4900-9f1b-37d1d13cbc1f", 00:08:42.039 "is_configured": true, 00:08:42.039 "data_offset": 0, 00:08:42.039 "data_size": 65536 00:08:42.039 }, 00:08:42.039 { 00:08:42.039 "name": "BaseBdev2", 00:08:42.039 "uuid": "8719e9a6-169b-44b2-abf2-3b78bc9ece12", 00:08:42.039 "is_configured": true, 00:08:42.039 "data_offset": 0, 00:08:42.039 "data_size": 65536 00:08:42.039 }, 00:08:42.039 { 
00:08:42.039 "name": "BaseBdev3", 00:08:42.039 "uuid": "599c5943-afbd-4d6e-bbed-3f7282283aa3", 00:08:42.039 "is_configured": true, 00:08:42.039 "data_offset": 0, 00:08:42.039 "data_size": 65536 00:08:42.039 } 00:08:42.039 ] 00:08:42.039 } 00:08:42.039 } 00:08:42.039 }' 00:08:42.039 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.039 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:42.039 BaseBdev2 00:08:42.039 BaseBdev3' 00:08:42.039 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.299 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.299 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.299 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:42.299 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.299 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.299 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.299 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.299 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.299 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.299 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.299 17:01:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:42.299 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.299 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.299 17:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.299 17:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.299 17:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.299 17:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.299 17:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:42.300 [2024-11-20 17:01:06.081647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:42.300 [2024-11-20 17:01:06.081675] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.300 [2024-11-20 17:01:06.081749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.300 [2024-11-20 17:01:06.081846] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.300 [2024-11-20 17:01:06.081864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65436 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65436 ']' 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65436 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65436 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65436' 00:08:42.300 killing process with pid 65436 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65436 00:08:42.300 [2024-11-20 17:01:06.124793] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.300 17:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65436 00:08:42.559 [2024-11-20 17:01:06.382529] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.496 17:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:43.496 00:08:43.496 real 0m11.753s 00:08:43.496 user 0m19.642s 00:08:43.496 sys 0m1.552s 00:08:43.496 17:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.496 17:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.496 ************************************ 00:08:43.496 END TEST raid_state_function_test 00:08:43.496 ************************************ 00:08:43.754 17:01:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:43.754 17:01:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:43.754 17:01:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.754 17:01:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.754 ************************************ 00:08:43.754 START TEST raid_state_function_test_sb 00:08:43.754 ************************************ 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:43.754 Process raid pid: 66075 00:08:43.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.754 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66075 00:08:43.755 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66075' 00:08:43.755 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:43.755 17:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66075 00:08:43.755 17:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66075 ']' 00:08:43.755 17:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.755 17:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.755 17:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:43.755 17:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.755 17:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.755 [2024-11-20 17:01:07.493442] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:08:43.755 [2024-11-20 17:01:07.494221] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.025 [2024-11-20 17:01:07.685184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.025 [2024-11-20 17:01:07.799409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.287 [2024-11-20 17:01:07.985416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.287 [2024-11-20 17:01:07.985457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.855 [2024-11-20 17:01:08.483020] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:44.855 [2024-11-20 17:01:08.483095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:44.855 [2024-11-20 
17:01:08.483127] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:44.855 [2024-11-20 17:01:08.483157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:44.855 [2024-11-20 17:01:08.483166] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:44.855 [2024-11-20 17:01:08.483178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.855 17:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.855 "name": "Existed_Raid", 00:08:44.855 "uuid": "a6b96f29-73eb-4b61-ae55-87fcec53b6d0", 00:08:44.855 "strip_size_kb": 64, 00:08:44.855 "state": "configuring", 00:08:44.855 "raid_level": "concat", 00:08:44.855 "superblock": true, 00:08:44.855 "num_base_bdevs": 3, 00:08:44.855 "num_base_bdevs_discovered": 0, 00:08:44.855 "num_base_bdevs_operational": 3, 00:08:44.855 "base_bdevs_list": [ 00:08:44.855 { 00:08:44.855 "name": "BaseBdev1", 00:08:44.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.855 "is_configured": false, 00:08:44.855 "data_offset": 0, 00:08:44.855 "data_size": 0 00:08:44.855 }, 00:08:44.855 { 00:08:44.855 "name": "BaseBdev2", 00:08:44.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.855 "is_configured": false, 00:08:44.855 "data_offset": 0, 00:08:44.855 "data_size": 0 00:08:44.855 }, 00:08:44.855 { 00:08:44.855 "name": "BaseBdev3", 00:08:44.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.855 "is_configured": false, 00:08:44.856 "data_offset": 0, 00:08:44.856 "data_size": 0 00:08:44.856 } 00:08:44.856 ] 00:08:44.856 }' 00:08:44.856 17:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.856 17:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.115 17:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:45.115 17:01:08 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.115 17:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.115 [2024-11-20 17:01:08.963116] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:45.115 [2024-11-20 17:01:08.963393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:45.115 17:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.115 17:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:45.115 17:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.115 17:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.115 [2024-11-20 17:01:08.971163] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.115 [2024-11-20 17:01:08.971225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.115 [2024-11-20 17:01:08.971238] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.115 [2024-11-20 17:01:08.971252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:45.115 [2024-11-20 17:01:08.971260] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:45.115 [2024-11-20 17:01:08.971272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:45.115 17:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.115 17:01:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:45.115 
17:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.115 17:01:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.375 [2024-11-20 17:01:09.013410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.375 BaseBdev1 00:08:45.375 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.375 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:45.375 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:45.375 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.375 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:45.375 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.375 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.375 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.375 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.375 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.375 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.375 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:45.375 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.375 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.375 [ 00:08:45.375 { 
00:08:45.375 "name": "BaseBdev1", 00:08:45.375 "aliases": [ 00:08:45.375 "dc6b4eb5-907c-44a1-998c-c65ada033954" 00:08:45.375 ], 00:08:45.375 "product_name": "Malloc disk", 00:08:45.375 "block_size": 512, 00:08:45.375 "num_blocks": 65536, 00:08:45.375 "uuid": "dc6b4eb5-907c-44a1-998c-c65ada033954", 00:08:45.375 "assigned_rate_limits": { 00:08:45.375 "rw_ios_per_sec": 0, 00:08:45.375 "rw_mbytes_per_sec": 0, 00:08:45.375 "r_mbytes_per_sec": 0, 00:08:45.375 "w_mbytes_per_sec": 0 00:08:45.375 }, 00:08:45.375 "claimed": true, 00:08:45.375 "claim_type": "exclusive_write", 00:08:45.375 "zoned": false, 00:08:45.375 "supported_io_types": { 00:08:45.375 "read": true, 00:08:45.375 "write": true, 00:08:45.375 "unmap": true, 00:08:45.375 "flush": true, 00:08:45.375 "reset": true, 00:08:45.375 "nvme_admin": false, 00:08:45.375 "nvme_io": false, 00:08:45.375 "nvme_io_md": false, 00:08:45.375 "write_zeroes": true, 00:08:45.375 "zcopy": true, 00:08:45.375 "get_zone_info": false, 00:08:45.375 "zone_management": false, 00:08:45.375 "zone_append": false, 00:08:45.375 "compare": false, 00:08:45.375 "compare_and_write": false, 00:08:45.375 "abort": true, 00:08:45.375 "seek_hole": false, 00:08:45.375 "seek_data": false, 00:08:45.375 "copy": true, 00:08:45.375 "nvme_iov_md": false 00:08:45.375 }, 00:08:45.375 "memory_domains": [ 00:08:45.375 { 00:08:45.375 "dma_device_id": "system", 00:08:45.375 "dma_device_type": 1 00:08:45.375 }, 00:08:45.375 { 00:08:45.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.376 "dma_device_type": 2 00:08:45.376 } 00:08:45.376 ], 00:08:45.376 "driver_specific": {} 00:08:45.376 } 00:08:45.376 ] 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.376 "name": "Existed_Raid", 00:08:45.376 "uuid": "028054b4-5f55-4a8f-a7cf-1e2d6fa9d46f", 00:08:45.376 "strip_size_kb": 64, 00:08:45.376 "state": "configuring", 00:08:45.376 "raid_level": "concat", 00:08:45.376 "superblock": true, 00:08:45.376 
"num_base_bdevs": 3, 00:08:45.376 "num_base_bdevs_discovered": 1, 00:08:45.376 "num_base_bdevs_operational": 3, 00:08:45.376 "base_bdevs_list": [ 00:08:45.376 { 00:08:45.376 "name": "BaseBdev1", 00:08:45.376 "uuid": "dc6b4eb5-907c-44a1-998c-c65ada033954", 00:08:45.376 "is_configured": true, 00:08:45.376 "data_offset": 2048, 00:08:45.376 "data_size": 63488 00:08:45.376 }, 00:08:45.376 { 00:08:45.376 "name": "BaseBdev2", 00:08:45.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.376 "is_configured": false, 00:08:45.376 "data_offset": 0, 00:08:45.376 "data_size": 0 00:08:45.376 }, 00:08:45.376 { 00:08:45.376 "name": "BaseBdev3", 00:08:45.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.376 "is_configured": false, 00:08:45.376 "data_offset": 0, 00:08:45.376 "data_size": 0 00:08:45.376 } 00:08:45.376 ] 00:08:45.376 }' 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.376 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.024 [2024-11-20 17:01:09.569623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:46.024 [2024-11-20 17:01:09.569846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:46.024 
17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.024 [2024-11-20 17:01:09.577675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.024 [2024-11-20 17:01:09.580286] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.024 [2024-11-20 17:01:09.580471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.024 [2024-11-20 17:01:09.580503] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:46.024 [2024-11-20 17:01:09.580519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.024 "name": "Existed_Raid", 00:08:46.024 "uuid": "d577c86c-73c8-4205-8447-cf14497d4f4e", 00:08:46.024 "strip_size_kb": 64, 00:08:46.024 "state": "configuring", 00:08:46.024 "raid_level": "concat", 00:08:46.024 "superblock": true, 00:08:46.024 "num_base_bdevs": 3, 00:08:46.024 "num_base_bdevs_discovered": 1, 00:08:46.024 "num_base_bdevs_operational": 3, 00:08:46.024 "base_bdevs_list": [ 00:08:46.024 { 00:08:46.024 "name": "BaseBdev1", 00:08:46.024 "uuid": "dc6b4eb5-907c-44a1-998c-c65ada033954", 00:08:46.024 "is_configured": true, 00:08:46.024 "data_offset": 2048, 00:08:46.024 "data_size": 63488 00:08:46.024 }, 00:08:46.024 { 00:08:46.024 "name": "BaseBdev2", 00:08:46.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.024 "is_configured": false, 00:08:46.024 "data_offset": 0, 00:08:46.024 "data_size": 0 00:08:46.024 }, 00:08:46.024 { 00:08:46.024 "name": "BaseBdev3", 00:08:46.024 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:46.024 "is_configured": false, 00:08:46.024 "data_offset": 0, 00:08:46.024 "data_size": 0 00:08:46.024 } 00:08:46.024 ] 00:08:46.024 }' 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.024 17:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.284 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:46.284 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.284 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.284 [2024-11-20 17:01:10.139140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:46.284 BaseBdev2 00:08:46.284 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.284 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:46.284 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:46.284 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.284 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:46.284 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.284 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.284 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.284 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.284 17:01:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.543 [ 00:08:46.543 { 00:08:46.543 "name": "BaseBdev2", 00:08:46.543 "aliases": [ 00:08:46.543 "96610306-f7af-4467-aebe-b925351a85b1" 00:08:46.543 ], 00:08:46.543 "product_name": "Malloc disk", 00:08:46.543 "block_size": 512, 00:08:46.543 "num_blocks": 65536, 00:08:46.543 "uuid": "96610306-f7af-4467-aebe-b925351a85b1", 00:08:46.543 "assigned_rate_limits": { 00:08:46.543 "rw_ios_per_sec": 0, 00:08:46.543 "rw_mbytes_per_sec": 0, 00:08:46.543 "r_mbytes_per_sec": 0, 00:08:46.543 "w_mbytes_per_sec": 0 00:08:46.543 }, 00:08:46.543 "claimed": true, 00:08:46.543 "claim_type": "exclusive_write", 00:08:46.543 "zoned": false, 00:08:46.543 "supported_io_types": { 00:08:46.543 "read": true, 00:08:46.543 "write": true, 00:08:46.543 "unmap": true, 00:08:46.543 "flush": true, 00:08:46.543 "reset": true, 00:08:46.543 "nvme_admin": false, 00:08:46.543 "nvme_io": false, 00:08:46.543 "nvme_io_md": false, 00:08:46.543 "write_zeroes": true, 00:08:46.543 "zcopy": true, 00:08:46.543 "get_zone_info": false, 00:08:46.543 "zone_management": false, 00:08:46.543 "zone_append": false, 00:08:46.543 "compare": false, 00:08:46.543 "compare_and_write": false, 00:08:46.543 "abort": true, 00:08:46.543 "seek_hole": false, 00:08:46.543 "seek_data": false, 00:08:46.543 "copy": true, 00:08:46.543 "nvme_iov_md": false 00:08:46.543 }, 00:08:46.543 "memory_domains": [ 00:08:46.543 { 00:08:46.543 "dma_device_id": "system", 00:08:46.543 "dma_device_type": 1 00:08:46.543 }, 00:08:46.543 { 00:08:46.543 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.543 "dma_device_type": 2 00:08:46.543 } 00:08:46.543 ], 00:08:46.543 "driver_specific": {} 00:08:46.543 } 00:08:46.543 ] 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.543 "name": "Existed_Raid", 00:08:46.543 "uuid": "d577c86c-73c8-4205-8447-cf14497d4f4e", 00:08:46.543 "strip_size_kb": 64, 00:08:46.543 "state": "configuring", 00:08:46.543 "raid_level": "concat", 00:08:46.543 "superblock": true, 00:08:46.543 "num_base_bdevs": 3, 00:08:46.543 "num_base_bdevs_discovered": 2, 00:08:46.543 "num_base_bdevs_operational": 3, 00:08:46.543 "base_bdevs_list": [ 00:08:46.543 { 00:08:46.543 "name": "BaseBdev1", 00:08:46.543 "uuid": "dc6b4eb5-907c-44a1-998c-c65ada033954", 00:08:46.543 "is_configured": true, 00:08:46.543 "data_offset": 2048, 00:08:46.543 "data_size": 63488 00:08:46.543 }, 00:08:46.543 { 00:08:46.543 "name": "BaseBdev2", 00:08:46.543 "uuid": "96610306-f7af-4467-aebe-b925351a85b1", 00:08:46.543 "is_configured": true, 00:08:46.543 "data_offset": 2048, 00:08:46.543 "data_size": 63488 00:08:46.543 }, 00:08:46.543 { 00:08:46.543 "name": "BaseBdev3", 00:08:46.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.543 "is_configured": false, 00:08:46.543 "data_offset": 0, 00:08:46.543 "data_size": 0 00:08:46.543 } 00:08:46.543 ] 00:08:46.543 }' 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.543 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:47.111 17:01:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.111 [2024-11-20 17:01:10.748434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.111 [2024-11-20 17:01:10.749068] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:47.111 [2024-11-20 17:01:10.749267] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:47.111 BaseBdev3 00:08:47.111 [2024-11-20 17:01:10.749642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:47.111 [2024-11-20 17:01:10.749948] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:47.111 [2024-11-20 17:01:10.749966] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:47.111 [2024-11-20 17:01:10.750148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.111 [ 00:08:47.111 { 00:08:47.111 "name": "BaseBdev3", 00:08:47.111 "aliases": [ 00:08:47.111 "c0e40b12-0af8-434d-a220-d7099f688f7c" 00:08:47.111 ], 00:08:47.111 "product_name": "Malloc disk", 00:08:47.111 "block_size": 512, 00:08:47.111 "num_blocks": 65536, 00:08:47.111 "uuid": "c0e40b12-0af8-434d-a220-d7099f688f7c", 00:08:47.111 "assigned_rate_limits": { 00:08:47.111 "rw_ios_per_sec": 0, 00:08:47.111 "rw_mbytes_per_sec": 0, 00:08:47.111 "r_mbytes_per_sec": 0, 00:08:47.111 "w_mbytes_per_sec": 0 00:08:47.111 }, 00:08:47.111 "claimed": true, 00:08:47.111 "claim_type": "exclusive_write", 00:08:47.111 "zoned": false, 00:08:47.111 "supported_io_types": { 00:08:47.111 "read": true, 00:08:47.111 "write": true, 00:08:47.111 "unmap": true, 00:08:47.111 "flush": true, 00:08:47.111 "reset": true, 00:08:47.111 "nvme_admin": false, 00:08:47.111 "nvme_io": false, 00:08:47.111 "nvme_io_md": false, 00:08:47.111 "write_zeroes": true, 00:08:47.111 "zcopy": true, 00:08:47.111 "get_zone_info": false, 00:08:47.111 "zone_management": false, 00:08:47.111 "zone_append": false, 00:08:47.111 "compare": false, 00:08:47.111 "compare_and_write": false, 00:08:47.111 "abort": true, 00:08:47.111 "seek_hole": false, 00:08:47.111 "seek_data": false, 
00:08:47.111 "copy": true, 00:08:47.111 "nvme_iov_md": false 00:08:47.111 }, 00:08:47.111 "memory_domains": [ 00:08:47.111 { 00:08:47.111 "dma_device_id": "system", 00:08:47.111 "dma_device_type": 1 00:08:47.111 }, 00:08:47.111 { 00:08:47.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.111 "dma_device_type": 2 00:08:47.111 } 00:08:47.111 ], 00:08:47.111 "driver_specific": {} 00:08:47.111 } 00:08:47.111 ] 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.111 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.111 "name": "Existed_Raid", 00:08:47.111 "uuid": "d577c86c-73c8-4205-8447-cf14497d4f4e", 00:08:47.111 "strip_size_kb": 64, 00:08:47.111 "state": "online", 00:08:47.112 "raid_level": "concat", 00:08:47.112 "superblock": true, 00:08:47.112 "num_base_bdevs": 3, 00:08:47.112 "num_base_bdevs_discovered": 3, 00:08:47.112 "num_base_bdevs_operational": 3, 00:08:47.112 "base_bdevs_list": [ 00:08:47.112 { 00:08:47.112 "name": "BaseBdev1", 00:08:47.112 "uuid": "dc6b4eb5-907c-44a1-998c-c65ada033954", 00:08:47.112 "is_configured": true, 00:08:47.112 "data_offset": 2048, 00:08:47.112 "data_size": 63488 00:08:47.112 }, 00:08:47.112 { 00:08:47.112 "name": "BaseBdev2", 00:08:47.112 "uuid": "96610306-f7af-4467-aebe-b925351a85b1", 00:08:47.112 "is_configured": true, 00:08:47.112 "data_offset": 2048, 00:08:47.112 "data_size": 63488 00:08:47.112 }, 00:08:47.112 { 00:08:47.112 "name": "BaseBdev3", 00:08:47.112 "uuid": "c0e40b12-0af8-434d-a220-d7099f688f7c", 00:08:47.112 "is_configured": true, 00:08:47.112 "data_offset": 2048, 00:08:47.112 "data_size": 63488 00:08:47.112 } 00:08:47.112 ] 00:08:47.112 }' 00:08:47.112 17:01:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.112 17:01:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.680 [2024-11-20 17:01:11.305087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:47.680 "name": "Existed_Raid", 00:08:47.680 "aliases": [ 00:08:47.680 "d577c86c-73c8-4205-8447-cf14497d4f4e" 00:08:47.680 ], 00:08:47.680 "product_name": "Raid Volume", 00:08:47.680 "block_size": 512, 00:08:47.680 "num_blocks": 190464, 00:08:47.680 "uuid": "d577c86c-73c8-4205-8447-cf14497d4f4e", 00:08:47.680 "assigned_rate_limits": { 00:08:47.680 "rw_ios_per_sec": 0, 00:08:47.680 "rw_mbytes_per_sec": 0, 00:08:47.680 
"r_mbytes_per_sec": 0, 00:08:47.680 "w_mbytes_per_sec": 0 00:08:47.680 }, 00:08:47.680 "claimed": false, 00:08:47.680 "zoned": false, 00:08:47.680 "supported_io_types": { 00:08:47.680 "read": true, 00:08:47.680 "write": true, 00:08:47.680 "unmap": true, 00:08:47.680 "flush": true, 00:08:47.680 "reset": true, 00:08:47.680 "nvme_admin": false, 00:08:47.680 "nvme_io": false, 00:08:47.680 "nvme_io_md": false, 00:08:47.680 "write_zeroes": true, 00:08:47.680 "zcopy": false, 00:08:47.680 "get_zone_info": false, 00:08:47.680 "zone_management": false, 00:08:47.680 "zone_append": false, 00:08:47.680 "compare": false, 00:08:47.680 "compare_and_write": false, 00:08:47.680 "abort": false, 00:08:47.680 "seek_hole": false, 00:08:47.680 "seek_data": false, 00:08:47.680 "copy": false, 00:08:47.680 "nvme_iov_md": false 00:08:47.680 }, 00:08:47.680 "memory_domains": [ 00:08:47.680 { 00:08:47.680 "dma_device_id": "system", 00:08:47.680 "dma_device_type": 1 00:08:47.680 }, 00:08:47.680 { 00:08:47.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.680 "dma_device_type": 2 00:08:47.680 }, 00:08:47.680 { 00:08:47.680 "dma_device_id": "system", 00:08:47.680 "dma_device_type": 1 00:08:47.680 }, 00:08:47.680 { 00:08:47.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.680 "dma_device_type": 2 00:08:47.680 }, 00:08:47.680 { 00:08:47.680 "dma_device_id": "system", 00:08:47.680 "dma_device_type": 1 00:08:47.680 }, 00:08:47.680 { 00:08:47.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.680 "dma_device_type": 2 00:08:47.680 } 00:08:47.680 ], 00:08:47.680 "driver_specific": { 00:08:47.680 "raid": { 00:08:47.680 "uuid": "d577c86c-73c8-4205-8447-cf14497d4f4e", 00:08:47.680 "strip_size_kb": 64, 00:08:47.680 "state": "online", 00:08:47.680 "raid_level": "concat", 00:08:47.680 "superblock": true, 00:08:47.680 "num_base_bdevs": 3, 00:08:47.680 "num_base_bdevs_discovered": 3, 00:08:47.680 "num_base_bdevs_operational": 3, 00:08:47.680 "base_bdevs_list": [ 00:08:47.680 { 00:08:47.680 
"name": "BaseBdev1", 00:08:47.680 "uuid": "dc6b4eb5-907c-44a1-998c-c65ada033954", 00:08:47.680 "is_configured": true, 00:08:47.680 "data_offset": 2048, 00:08:47.680 "data_size": 63488 00:08:47.680 }, 00:08:47.680 { 00:08:47.680 "name": "BaseBdev2", 00:08:47.680 "uuid": "96610306-f7af-4467-aebe-b925351a85b1", 00:08:47.680 "is_configured": true, 00:08:47.680 "data_offset": 2048, 00:08:47.680 "data_size": 63488 00:08:47.680 }, 00:08:47.680 { 00:08:47.680 "name": "BaseBdev3", 00:08:47.680 "uuid": "c0e40b12-0af8-434d-a220-d7099f688f7c", 00:08:47.680 "is_configured": true, 00:08:47.680 "data_offset": 2048, 00:08:47.680 "data_size": 63488 00:08:47.680 } 00:08:47.680 ] 00:08:47.680 } 00:08:47.680 } 00:08:47.680 }' 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:47.680 BaseBdev2 00:08:47.680 BaseBdev3' 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.680 17:01:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.680 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.681 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:47.681 17:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.681 17:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.681 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.681 17:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.939 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.939 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.939 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.939 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:47.939 17:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.939 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.939 17:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.939 17:01:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.939 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.939 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.939 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:47.939 17:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.940 [2024-11-20 17:01:11.624846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:47.940 [2024-11-20 17:01:11.624880] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.940 [2024-11-20 17:01:11.624965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.940 "name": "Existed_Raid", 00:08:47.940 "uuid": "d577c86c-73c8-4205-8447-cf14497d4f4e", 00:08:47.940 "strip_size_kb": 64, 00:08:47.940 "state": "offline", 00:08:47.940 "raid_level": "concat", 00:08:47.940 "superblock": true, 00:08:47.940 "num_base_bdevs": 3, 00:08:47.940 "num_base_bdevs_discovered": 2, 00:08:47.940 "num_base_bdevs_operational": 2, 00:08:47.940 "base_bdevs_list": [ 00:08:47.940 { 00:08:47.940 "name": null, 00:08:47.940 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:47.940 "is_configured": false, 00:08:47.940 "data_offset": 0, 00:08:47.940 "data_size": 63488 00:08:47.940 }, 00:08:47.940 { 00:08:47.940 "name": "BaseBdev2", 00:08:47.940 "uuid": "96610306-f7af-4467-aebe-b925351a85b1", 00:08:47.940 "is_configured": true, 00:08:47.940 "data_offset": 2048, 00:08:47.940 "data_size": 63488 00:08:47.940 }, 00:08:47.940 { 00:08:47.940 "name": "BaseBdev3", 00:08:47.940 "uuid": "c0e40b12-0af8-434d-a220-d7099f688f7c", 00:08:47.940 "is_configured": true, 00:08:47.940 "data_offset": 2048, 00:08:47.940 "data_size": 63488 00:08:47.940 } 00:08:47.940 ] 00:08:47.940 }' 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.940 17:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.508 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:48.508 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.508 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.509 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:48.509 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.509 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.509 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.509 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:48.509 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:48.509 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:48.509 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.509 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.509 [2024-11-20 17:01:12.287433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:48.509 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.509 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:48.509 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.768 [2024-11-20 17:01:12.430642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:48.768 [2024-11-20 17:01:12.430698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.768 BaseBdev2 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.768 
17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.768 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.769 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:48.769 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.769 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.769 [ 00:08:48.769 { 00:08:48.769 "name": "BaseBdev2", 00:08:48.769 "aliases": [ 00:08:48.769 "94ec919a-9643-418e-be30-2459bbfd7acc" 00:08:48.769 ], 00:08:48.769 "product_name": "Malloc disk", 00:08:48.769 "block_size": 512, 00:08:48.769 "num_blocks": 65536, 00:08:48.769 "uuid": "94ec919a-9643-418e-be30-2459bbfd7acc", 00:08:48.769 "assigned_rate_limits": { 00:08:48.769 "rw_ios_per_sec": 0, 00:08:48.769 "rw_mbytes_per_sec": 0, 00:08:48.769 "r_mbytes_per_sec": 0, 00:08:48.769 "w_mbytes_per_sec": 0 
00:08:48.769 }, 00:08:48.769 "claimed": false, 00:08:48.769 "zoned": false, 00:08:48.769 "supported_io_types": { 00:08:48.769 "read": true, 00:08:49.029 "write": true, 00:08:49.029 "unmap": true, 00:08:49.029 "flush": true, 00:08:49.029 "reset": true, 00:08:49.029 "nvme_admin": false, 00:08:49.029 "nvme_io": false, 00:08:49.029 "nvme_io_md": false, 00:08:49.029 "write_zeroes": true, 00:08:49.029 "zcopy": true, 00:08:49.029 "get_zone_info": false, 00:08:49.029 "zone_management": false, 00:08:49.029 "zone_append": false, 00:08:49.029 "compare": false, 00:08:49.029 "compare_and_write": false, 00:08:49.029 "abort": true, 00:08:49.029 "seek_hole": false, 00:08:49.029 "seek_data": false, 00:08:49.029 "copy": true, 00:08:49.029 "nvme_iov_md": false 00:08:49.029 }, 00:08:49.029 "memory_domains": [ 00:08:49.029 { 00:08:49.029 "dma_device_id": "system", 00:08:49.029 "dma_device_type": 1 00:08:49.029 }, 00:08:49.029 { 00:08:49.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.029 "dma_device_type": 2 00:08:49.029 } 00:08:49.029 ], 00:08:49.029 "driver_specific": {} 00:08:49.029 } 00:08:49.029 ] 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.029 BaseBdev3 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.029 [ 00:08:49.029 { 00:08:49.029 "name": "BaseBdev3", 00:08:49.029 "aliases": [ 00:08:49.029 "4e7d1ffa-3388-4dbb-9d97-bda77e258ac6" 00:08:49.029 ], 00:08:49.029 "product_name": "Malloc disk", 00:08:49.029 "block_size": 512, 00:08:49.029 "num_blocks": 65536, 00:08:49.029 "uuid": "4e7d1ffa-3388-4dbb-9d97-bda77e258ac6", 00:08:49.029 "assigned_rate_limits": { 00:08:49.029 "rw_ios_per_sec": 0, 00:08:49.029 "rw_mbytes_per_sec": 0, 
00:08:49.029 "r_mbytes_per_sec": 0, 00:08:49.029 "w_mbytes_per_sec": 0 00:08:49.029 }, 00:08:49.029 "claimed": false, 00:08:49.029 "zoned": false, 00:08:49.029 "supported_io_types": { 00:08:49.029 "read": true, 00:08:49.029 "write": true, 00:08:49.029 "unmap": true, 00:08:49.029 "flush": true, 00:08:49.029 "reset": true, 00:08:49.029 "nvme_admin": false, 00:08:49.029 "nvme_io": false, 00:08:49.029 "nvme_io_md": false, 00:08:49.029 "write_zeroes": true, 00:08:49.029 "zcopy": true, 00:08:49.029 "get_zone_info": false, 00:08:49.029 "zone_management": false, 00:08:49.029 "zone_append": false, 00:08:49.029 "compare": false, 00:08:49.029 "compare_and_write": false, 00:08:49.029 "abort": true, 00:08:49.029 "seek_hole": false, 00:08:49.029 "seek_data": false, 00:08:49.029 "copy": true, 00:08:49.029 "nvme_iov_md": false 00:08:49.029 }, 00:08:49.029 "memory_domains": [ 00:08:49.029 { 00:08:49.029 "dma_device_id": "system", 00:08:49.029 "dma_device_type": 1 00:08:49.029 }, 00:08:49.029 { 00:08:49.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.029 "dma_device_type": 2 00:08:49.029 } 00:08:49.029 ], 00:08:49.029 "driver_specific": {} 00:08:49.029 } 00:08:49.029 ] 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.029 [2024-11-20 17:01:12.727904] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:49.029 [2024-11-20 17:01:12.727955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:49.029 [2024-11-20 17:01:12.727986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.029 [2024-11-20 17:01:12.730453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.029 17:01:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.029 "name": "Existed_Raid", 00:08:49.029 "uuid": "6563e778-3f0a-4851-9c13-f64e65803f65", 00:08:49.029 "strip_size_kb": 64, 00:08:49.029 "state": "configuring", 00:08:49.029 "raid_level": "concat", 00:08:49.029 "superblock": true, 00:08:49.029 "num_base_bdevs": 3, 00:08:49.029 "num_base_bdevs_discovered": 2, 00:08:49.029 "num_base_bdevs_operational": 3, 00:08:49.029 "base_bdevs_list": [ 00:08:49.029 { 00:08:49.029 "name": "BaseBdev1", 00:08:49.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.029 "is_configured": false, 00:08:49.029 "data_offset": 0, 00:08:49.029 "data_size": 0 00:08:49.029 }, 00:08:49.029 { 00:08:49.029 "name": "BaseBdev2", 00:08:49.029 "uuid": "94ec919a-9643-418e-be30-2459bbfd7acc", 00:08:49.029 "is_configured": true, 00:08:49.029 "data_offset": 2048, 00:08:49.029 "data_size": 63488 00:08:49.029 }, 00:08:49.029 { 00:08:49.029 "name": "BaseBdev3", 00:08:49.029 "uuid": "4e7d1ffa-3388-4dbb-9d97-bda77e258ac6", 00:08:49.029 "is_configured": true, 00:08:49.029 "data_offset": 2048, 00:08:49.029 "data_size": 63488 00:08:49.029 } 00:08:49.029 ] 00:08:49.029 }' 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.029 17:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.598 [2024-11-20 17:01:13.256187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.598 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.598 "name": "Existed_Raid", 00:08:49.598 "uuid": "6563e778-3f0a-4851-9c13-f64e65803f65", 00:08:49.598 "strip_size_kb": 64, 00:08:49.598 "state": "configuring", 00:08:49.598 "raid_level": "concat", 00:08:49.598 "superblock": true, 00:08:49.598 "num_base_bdevs": 3, 00:08:49.598 "num_base_bdevs_discovered": 1, 00:08:49.598 "num_base_bdevs_operational": 3, 00:08:49.598 "base_bdevs_list": [ 00:08:49.598 { 00:08:49.598 "name": "BaseBdev1", 00:08:49.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.598 "is_configured": false, 00:08:49.598 "data_offset": 0, 00:08:49.598 "data_size": 0 00:08:49.598 }, 00:08:49.598 { 00:08:49.598 "name": null, 00:08:49.598 "uuid": "94ec919a-9643-418e-be30-2459bbfd7acc", 00:08:49.598 "is_configured": false, 00:08:49.598 "data_offset": 0, 00:08:49.598 "data_size": 63488 00:08:49.598 }, 00:08:49.598 { 00:08:49.598 "name": "BaseBdev3", 00:08:49.598 "uuid": "4e7d1ffa-3388-4dbb-9d97-bda77e258ac6", 00:08:49.598 "is_configured": true, 00:08:49.598 "data_offset": 2048, 00:08:49.598 "data_size": 63488 00:08:49.599 } 00:08:49.599 ] 00:08:49.599 }' 00:08:49.599 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.599 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 [2024-11-20 17:01:13.904333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.167 BaseBdev1 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 17:01:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.167 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 [ 00:08:50.167 { 00:08:50.167 "name": "BaseBdev1", 00:08:50.167 "aliases": [ 00:08:50.167 "e8f141c0-d3c8-4e1d-b8fe-b7982c995983" 00:08:50.167 ], 00:08:50.167 "product_name": "Malloc disk", 00:08:50.167 "block_size": 512, 00:08:50.167 "num_blocks": 65536, 00:08:50.167 "uuid": "e8f141c0-d3c8-4e1d-b8fe-b7982c995983", 00:08:50.167 "assigned_rate_limits": { 00:08:50.167 "rw_ios_per_sec": 0, 00:08:50.167 "rw_mbytes_per_sec": 0, 00:08:50.167 "r_mbytes_per_sec": 0, 00:08:50.167 "w_mbytes_per_sec": 0 00:08:50.167 }, 00:08:50.167 "claimed": true, 00:08:50.167 "claim_type": "exclusive_write", 00:08:50.167 "zoned": false, 00:08:50.167 "supported_io_types": { 00:08:50.167 "read": true, 00:08:50.167 "write": true, 00:08:50.167 "unmap": true, 00:08:50.167 "flush": true, 00:08:50.167 "reset": true, 00:08:50.167 "nvme_admin": false, 00:08:50.167 "nvme_io": false, 00:08:50.167 "nvme_io_md": false, 00:08:50.167 "write_zeroes": true, 00:08:50.167 "zcopy": true, 00:08:50.167 "get_zone_info": false, 00:08:50.167 "zone_management": false, 00:08:50.167 "zone_append": false, 00:08:50.167 "compare": false, 00:08:50.167 "compare_and_write": false, 00:08:50.167 "abort": true, 00:08:50.167 "seek_hole": false, 00:08:50.167 "seek_data": false, 00:08:50.167 "copy": true, 00:08:50.167 "nvme_iov_md": false 00:08:50.167 }, 00:08:50.168 "memory_domains": [ 00:08:50.168 { 00:08:50.168 "dma_device_id": "system", 00:08:50.168 "dma_device_type": 1 00:08:50.168 }, 00:08:50.168 { 00:08:50.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.168 
"dma_device_type": 2 00:08:50.168 } 00:08:50.168 ], 00:08:50.168 "driver_specific": {} 00:08:50.168 } 00:08:50.168 ] 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.168 "name": "Existed_Raid", 00:08:50.168 "uuid": "6563e778-3f0a-4851-9c13-f64e65803f65", 00:08:50.168 "strip_size_kb": 64, 00:08:50.168 "state": "configuring", 00:08:50.168 "raid_level": "concat", 00:08:50.168 "superblock": true, 00:08:50.168 "num_base_bdevs": 3, 00:08:50.168 "num_base_bdevs_discovered": 2, 00:08:50.168 "num_base_bdevs_operational": 3, 00:08:50.168 "base_bdevs_list": [ 00:08:50.168 { 00:08:50.168 "name": "BaseBdev1", 00:08:50.168 "uuid": "e8f141c0-d3c8-4e1d-b8fe-b7982c995983", 00:08:50.168 "is_configured": true, 00:08:50.168 "data_offset": 2048, 00:08:50.168 "data_size": 63488 00:08:50.168 }, 00:08:50.168 { 00:08:50.168 "name": null, 00:08:50.168 "uuid": "94ec919a-9643-418e-be30-2459bbfd7acc", 00:08:50.168 "is_configured": false, 00:08:50.168 "data_offset": 0, 00:08:50.168 "data_size": 63488 00:08:50.168 }, 00:08:50.168 { 00:08:50.168 "name": "BaseBdev3", 00:08:50.168 "uuid": "4e7d1ffa-3388-4dbb-9d97-bda77e258ac6", 00:08:50.168 "is_configured": true, 00:08:50.168 "data_offset": 2048, 00:08:50.168 "data_size": 63488 00:08:50.168 } 00:08:50.168 ] 00:08:50.168 }' 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.168 17:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.736 17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.736 17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:50.736 17:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.736 17:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:50.736 17:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.736 17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:50.736 17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:50.736 17:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.736 17:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.736 [2024-11-20 17:01:14.524609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:50.736 17:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.736 17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.736 17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.736 17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.736 17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.736 17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.736 17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.736 17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.737 17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.737 17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.737 17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.737 
17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.737 17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.737 17:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.737 17:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.737 17:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.737 17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.737 "name": "Existed_Raid", 00:08:50.737 "uuid": "6563e778-3f0a-4851-9c13-f64e65803f65", 00:08:50.737 "strip_size_kb": 64, 00:08:50.737 "state": "configuring", 00:08:50.737 "raid_level": "concat", 00:08:50.737 "superblock": true, 00:08:50.737 "num_base_bdevs": 3, 00:08:50.737 "num_base_bdevs_discovered": 1, 00:08:50.737 "num_base_bdevs_operational": 3, 00:08:50.737 "base_bdevs_list": [ 00:08:50.737 { 00:08:50.737 "name": "BaseBdev1", 00:08:50.737 "uuid": "e8f141c0-d3c8-4e1d-b8fe-b7982c995983", 00:08:50.737 "is_configured": true, 00:08:50.737 "data_offset": 2048, 00:08:50.737 "data_size": 63488 00:08:50.737 }, 00:08:50.737 { 00:08:50.737 "name": null, 00:08:50.737 "uuid": "94ec919a-9643-418e-be30-2459bbfd7acc", 00:08:50.737 "is_configured": false, 00:08:50.737 "data_offset": 0, 00:08:50.737 "data_size": 63488 00:08:50.737 }, 00:08:50.737 { 00:08:50.737 "name": null, 00:08:50.737 "uuid": "4e7d1ffa-3388-4dbb-9d97-bda77e258ac6", 00:08:50.737 "is_configured": false, 00:08:50.737 "data_offset": 0, 00:08:50.737 "data_size": 63488 00:08:50.737 } 00:08:50.737 ] 00:08:50.737 }' 00:08:50.737 17:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.737 17:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.305 
17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.305 [2024-11-20 17:01:15.108806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.305 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.305 "name": "Existed_Raid", 00:08:51.305 "uuid": "6563e778-3f0a-4851-9c13-f64e65803f65", 00:08:51.305 "strip_size_kb": 64, 00:08:51.305 "state": "configuring", 00:08:51.305 "raid_level": "concat", 00:08:51.305 "superblock": true, 00:08:51.305 "num_base_bdevs": 3, 00:08:51.305 "num_base_bdevs_discovered": 2, 00:08:51.305 "num_base_bdevs_operational": 3, 00:08:51.305 "base_bdevs_list": [ 00:08:51.305 { 00:08:51.305 "name": "BaseBdev1", 00:08:51.305 "uuid": "e8f141c0-d3c8-4e1d-b8fe-b7982c995983", 00:08:51.305 "is_configured": true, 00:08:51.305 "data_offset": 2048, 00:08:51.305 "data_size": 63488 00:08:51.305 }, 00:08:51.305 { 00:08:51.305 "name": null, 00:08:51.305 "uuid": "94ec919a-9643-418e-be30-2459bbfd7acc", 00:08:51.305 "is_configured": false, 00:08:51.305 "data_offset": 0, 00:08:51.305 "data_size": 
63488 00:08:51.305 }, 00:08:51.305 { 00:08:51.305 "name": "BaseBdev3", 00:08:51.305 "uuid": "4e7d1ffa-3388-4dbb-9d97-bda77e258ac6", 00:08:51.305 "is_configured": true, 00:08:51.305 "data_offset": 2048, 00:08:51.305 "data_size": 63488 00:08:51.305 } 00:08:51.305 ] 00:08:51.305 }' 00:08:51.306 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.306 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.873 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.873 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:51.873 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.873 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.873 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.873 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:51.873 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:51.873 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.873 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.873 [2024-11-20 17:01:15.677032] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.132 "name": "Existed_Raid", 00:08:52.132 "uuid": "6563e778-3f0a-4851-9c13-f64e65803f65", 00:08:52.132 "strip_size_kb": 64, 00:08:52.132 "state": "configuring", 00:08:52.132 "raid_level": "concat", 00:08:52.132 "superblock": true, 00:08:52.132 "num_base_bdevs": 3, 00:08:52.132 "num_base_bdevs_discovered": 1, 00:08:52.132 "num_base_bdevs_operational": 
3, 00:08:52.132 "base_bdevs_list": [ 00:08:52.132 { 00:08:52.132 "name": null, 00:08:52.132 "uuid": "e8f141c0-d3c8-4e1d-b8fe-b7982c995983", 00:08:52.132 "is_configured": false, 00:08:52.132 "data_offset": 0, 00:08:52.132 "data_size": 63488 00:08:52.132 }, 00:08:52.132 { 00:08:52.132 "name": null, 00:08:52.132 "uuid": "94ec919a-9643-418e-be30-2459bbfd7acc", 00:08:52.132 "is_configured": false, 00:08:52.132 "data_offset": 0, 00:08:52.132 "data_size": 63488 00:08:52.132 }, 00:08:52.132 { 00:08:52.132 "name": "BaseBdev3", 00:08:52.132 "uuid": "4e7d1ffa-3388-4dbb-9d97-bda77e258ac6", 00:08:52.132 "is_configured": true, 00:08:52.132 "data_offset": 2048, 00:08:52.132 "data_size": 63488 00:08:52.132 } 00:08:52.132 ] 00:08:52.132 }' 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.132 17:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:52.700 [2024-11-20 17:01:16.331620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.700 "name": "Existed_Raid", 00:08:52.700 "uuid": "6563e778-3f0a-4851-9c13-f64e65803f65", 00:08:52.700 "strip_size_kb": 64, 00:08:52.700 "state": "configuring", 00:08:52.700 "raid_level": "concat", 00:08:52.700 "superblock": true, 00:08:52.700 "num_base_bdevs": 3, 00:08:52.700 "num_base_bdevs_discovered": 2, 00:08:52.700 "num_base_bdevs_operational": 3, 00:08:52.700 "base_bdevs_list": [ 00:08:52.700 { 00:08:52.700 "name": null, 00:08:52.700 "uuid": "e8f141c0-d3c8-4e1d-b8fe-b7982c995983", 00:08:52.700 "is_configured": false, 00:08:52.700 "data_offset": 0, 00:08:52.700 "data_size": 63488 00:08:52.700 }, 00:08:52.700 { 00:08:52.700 "name": "BaseBdev2", 00:08:52.700 "uuid": "94ec919a-9643-418e-be30-2459bbfd7acc", 00:08:52.700 "is_configured": true, 00:08:52.700 "data_offset": 2048, 00:08:52.700 "data_size": 63488 00:08:52.700 }, 00:08:52.700 { 00:08:52.700 "name": "BaseBdev3", 00:08:52.700 "uuid": "4e7d1ffa-3388-4dbb-9d97-bda77e258ac6", 00:08:52.700 "is_configured": true, 00:08:52.700 "data_offset": 2048, 00:08:52.700 "data_size": 63488 00:08:52.700 } 00:08:52.700 ] 00:08:52.700 }' 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.700 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.269 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.269 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.269 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.269 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:53.269 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:53.269 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:53.269 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.269 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.269 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.269 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:53.269 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.269 17:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e8f141c0-d3c8-4e1d-b8fe-b7982c995983 00:08:53.269 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.269 17:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.269 [2024-11-20 17:01:17.003020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:53.269 [2024-11-20 17:01:17.003267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:53.269 [2024-11-20 17:01:17.003291] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:53.269 [2024-11-20 17:01:17.003614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:53.269 NewBaseBdev 00:08:53.269 [2024-11-20 17:01:17.003812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:53.269 [2024-11-20 17:01:17.003828] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:53.269 [2024-11-20 17:01:17.004001] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.269 [ 00:08:53.269 { 00:08:53.269 "name": "NewBaseBdev", 00:08:53.269 "aliases": [ 00:08:53.269 "e8f141c0-d3c8-4e1d-b8fe-b7982c995983" 00:08:53.269 ], 00:08:53.269 "product_name": "Malloc disk", 00:08:53.269 "block_size": 512, 00:08:53.269 "num_blocks": 65536, 00:08:53.269 "uuid": 
"e8f141c0-d3c8-4e1d-b8fe-b7982c995983", 00:08:53.269 "assigned_rate_limits": { 00:08:53.269 "rw_ios_per_sec": 0, 00:08:53.269 "rw_mbytes_per_sec": 0, 00:08:53.269 "r_mbytes_per_sec": 0, 00:08:53.269 "w_mbytes_per_sec": 0 00:08:53.269 }, 00:08:53.269 "claimed": true, 00:08:53.269 "claim_type": "exclusive_write", 00:08:53.269 "zoned": false, 00:08:53.269 "supported_io_types": { 00:08:53.269 "read": true, 00:08:53.269 "write": true, 00:08:53.269 "unmap": true, 00:08:53.269 "flush": true, 00:08:53.269 "reset": true, 00:08:53.269 "nvme_admin": false, 00:08:53.269 "nvme_io": false, 00:08:53.269 "nvme_io_md": false, 00:08:53.269 "write_zeroes": true, 00:08:53.269 "zcopy": true, 00:08:53.269 "get_zone_info": false, 00:08:53.269 "zone_management": false, 00:08:53.269 "zone_append": false, 00:08:53.269 "compare": false, 00:08:53.269 "compare_and_write": false, 00:08:53.269 "abort": true, 00:08:53.269 "seek_hole": false, 00:08:53.269 "seek_data": false, 00:08:53.269 "copy": true, 00:08:53.269 "nvme_iov_md": false 00:08:53.269 }, 00:08:53.269 "memory_domains": [ 00:08:53.269 { 00:08:53.269 "dma_device_id": "system", 00:08:53.269 "dma_device_type": 1 00:08:53.269 }, 00:08:53.269 { 00:08:53.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.269 "dma_device_type": 2 00:08:53.269 } 00:08:53.269 ], 00:08:53.269 "driver_specific": {} 00:08:53.269 } 00:08:53.269 ] 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.269 17:01:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.269 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.270 "name": "Existed_Raid", 00:08:53.270 "uuid": "6563e778-3f0a-4851-9c13-f64e65803f65", 00:08:53.270 "strip_size_kb": 64, 00:08:53.270 "state": "online", 00:08:53.270 "raid_level": "concat", 00:08:53.270 "superblock": true, 00:08:53.270 "num_base_bdevs": 3, 00:08:53.270 "num_base_bdevs_discovered": 3, 00:08:53.270 "num_base_bdevs_operational": 3, 00:08:53.270 "base_bdevs_list": [ 00:08:53.270 { 00:08:53.270 "name": "NewBaseBdev", 00:08:53.270 "uuid": "e8f141c0-d3c8-4e1d-b8fe-b7982c995983", 00:08:53.270 "is_configured": 
true, 00:08:53.270 "data_offset": 2048, 00:08:53.270 "data_size": 63488 00:08:53.270 }, 00:08:53.270 { 00:08:53.270 "name": "BaseBdev2", 00:08:53.270 "uuid": "94ec919a-9643-418e-be30-2459bbfd7acc", 00:08:53.270 "is_configured": true, 00:08:53.270 "data_offset": 2048, 00:08:53.270 "data_size": 63488 00:08:53.270 }, 00:08:53.270 { 00:08:53.270 "name": "BaseBdev3", 00:08:53.270 "uuid": "4e7d1ffa-3388-4dbb-9d97-bda77e258ac6", 00:08:53.270 "is_configured": true, 00:08:53.270 "data_offset": 2048, 00:08:53.270 "data_size": 63488 00:08:53.270 } 00:08:53.270 ] 00:08:53.270 }' 00:08:53.270 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.270 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.844 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:53.844 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:53.844 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:53.844 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:53.844 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:53.844 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:53.844 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:53.844 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:53.844 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.844 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.844 [2024-11-20 17:01:17.543626] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.844 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.844 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:53.844 "name": "Existed_Raid", 00:08:53.844 "aliases": [ 00:08:53.844 "6563e778-3f0a-4851-9c13-f64e65803f65" 00:08:53.844 ], 00:08:53.844 "product_name": "Raid Volume", 00:08:53.844 "block_size": 512, 00:08:53.844 "num_blocks": 190464, 00:08:53.844 "uuid": "6563e778-3f0a-4851-9c13-f64e65803f65", 00:08:53.844 "assigned_rate_limits": { 00:08:53.844 "rw_ios_per_sec": 0, 00:08:53.844 "rw_mbytes_per_sec": 0, 00:08:53.844 "r_mbytes_per_sec": 0, 00:08:53.844 "w_mbytes_per_sec": 0 00:08:53.844 }, 00:08:53.844 "claimed": false, 00:08:53.844 "zoned": false, 00:08:53.844 "supported_io_types": { 00:08:53.844 "read": true, 00:08:53.844 "write": true, 00:08:53.844 "unmap": true, 00:08:53.844 "flush": true, 00:08:53.844 "reset": true, 00:08:53.844 "nvme_admin": false, 00:08:53.844 "nvme_io": false, 00:08:53.844 "nvme_io_md": false, 00:08:53.844 "write_zeroes": true, 00:08:53.844 "zcopy": false, 00:08:53.844 "get_zone_info": false, 00:08:53.844 "zone_management": false, 00:08:53.844 "zone_append": false, 00:08:53.844 "compare": false, 00:08:53.844 "compare_and_write": false, 00:08:53.844 "abort": false, 00:08:53.844 "seek_hole": false, 00:08:53.844 "seek_data": false, 00:08:53.844 "copy": false, 00:08:53.845 "nvme_iov_md": false 00:08:53.845 }, 00:08:53.845 "memory_domains": [ 00:08:53.845 { 00:08:53.845 "dma_device_id": "system", 00:08:53.845 "dma_device_type": 1 00:08:53.845 }, 00:08:53.845 { 00:08:53.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.845 "dma_device_type": 2 00:08:53.845 }, 00:08:53.845 { 00:08:53.845 "dma_device_id": "system", 00:08:53.845 "dma_device_type": 1 00:08:53.845 }, 00:08:53.845 { 00:08:53.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.845 
"dma_device_type": 2 00:08:53.845 }, 00:08:53.845 { 00:08:53.845 "dma_device_id": "system", 00:08:53.845 "dma_device_type": 1 00:08:53.845 }, 00:08:53.845 { 00:08:53.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.845 "dma_device_type": 2 00:08:53.845 } 00:08:53.845 ], 00:08:53.845 "driver_specific": { 00:08:53.845 "raid": { 00:08:53.845 "uuid": "6563e778-3f0a-4851-9c13-f64e65803f65", 00:08:53.845 "strip_size_kb": 64, 00:08:53.845 "state": "online", 00:08:53.845 "raid_level": "concat", 00:08:53.845 "superblock": true, 00:08:53.845 "num_base_bdevs": 3, 00:08:53.845 "num_base_bdevs_discovered": 3, 00:08:53.845 "num_base_bdevs_operational": 3, 00:08:53.845 "base_bdevs_list": [ 00:08:53.845 { 00:08:53.845 "name": "NewBaseBdev", 00:08:53.845 "uuid": "e8f141c0-d3c8-4e1d-b8fe-b7982c995983", 00:08:53.845 "is_configured": true, 00:08:53.845 "data_offset": 2048, 00:08:53.845 "data_size": 63488 00:08:53.845 }, 00:08:53.845 { 00:08:53.845 "name": "BaseBdev2", 00:08:53.845 "uuid": "94ec919a-9643-418e-be30-2459bbfd7acc", 00:08:53.845 "is_configured": true, 00:08:53.845 "data_offset": 2048, 00:08:53.845 "data_size": 63488 00:08:53.845 }, 00:08:53.845 { 00:08:53.845 "name": "BaseBdev3", 00:08:53.845 "uuid": "4e7d1ffa-3388-4dbb-9d97-bda77e258ac6", 00:08:53.845 "is_configured": true, 00:08:53.845 "data_offset": 2048, 00:08:53.845 "data_size": 63488 00:08:53.845 } 00:08:53.845 ] 00:08:53.845 } 00:08:53.845 } 00:08:53.845 }' 00:08:53.845 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:53.845 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:53.845 BaseBdev2 00:08:53.845 BaseBdev3' 00:08:53.845 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.845 17:01:17 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:53.845 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.845 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:53.845 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.845 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.845 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.845 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.117 
17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.117 [2024-11-20 17:01:17.843331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:54.117 [2024-11-20 17:01:17.843391] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.117 [2024-11-20 17:01:17.843484] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.117 [2024-11-20 17:01:17.843559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.117 [2024-11-20 17:01:17.843581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:54.117 17:01:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66075 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66075 ']' 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66075 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66075 00:08:54.117 killing process with pid 66075 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66075' 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66075 00:08:54.117 [2024-11-20 17:01:17.882604] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.117 17:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66075 00:08:54.376 [2024-11-20 17:01:18.149118] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:55.752 17:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:55.752 00:08:55.752 real 0m11.822s 00:08:55.752 user 0m19.684s 00:08:55.752 sys 0m1.584s 00:08:55.752 17:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.752 17:01:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.752 ************************************ 00:08:55.752 END TEST raid_state_function_test_sb 00:08:55.752 ************************************ 00:08:55.752 17:01:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:55.752 17:01:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:55.752 17:01:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.752 17:01:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:55.752 ************************************ 00:08:55.752 START TEST raid_superblock_test 00:08:55.752 ************************************ 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66706 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66706 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66706 ']' 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.752 17:01:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.752 [2024-11-20 17:01:19.366975] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:08:55.752 [2024-11-20 17:01:19.367160] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66706 ] 00:08:55.752 [2024-11-20 17:01:19.555066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.011 [2024-11-20 17:01:19.706432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.269 [2024-11-20 17:01:19.918358] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.269 [2024-11-20 17:01:19.918443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.528 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.528 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:56.528 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:56.528 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:56.528 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:56.528 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:56.528 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:56.528 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:56.528 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:56.528 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:56.528 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:56.528 
17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.528 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.528 malloc1 00:08:56.528 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.528 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:56.528 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.528 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.528 [2024-11-20 17:01:20.393074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:56.528 [2024-11-20 17:01:20.393139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.528 [2024-11-20 17:01:20.393171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:56.528 [2024-11-20 17:01:20.393187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.786 [2024-11-20 17:01:20.396028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.786 [2024-11-20 17:01:20.396072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:56.786 pt1 00:08:56.786 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.786 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:56.786 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:56.786 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:56.786 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:56.786 17:01:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:56.786 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:56.786 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:56.786 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:56.786 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:56.786 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.786 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.786 malloc2 00:08:56.786 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.786 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:56.786 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.787 [2024-11-20 17:01:20.450333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:56.787 [2024-11-20 17:01:20.450395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.787 [2024-11-20 17:01:20.450432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:56.787 [2024-11-20 17:01:20.450448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.787 [2024-11-20 17:01:20.453290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.787 [2024-11-20 17:01:20.453369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:56.787 
pt2 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.787 malloc3 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.787 [2024-11-20 17:01:20.515884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:56.787 [2024-11-20 17:01:20.515947] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.787 [2024-11-20 17:01:20.515989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:56.787 [2024-11-20 17:01:20.516006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.787 [2024-11-20 17:01:20.519466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.787 [2024-11-20 17:01:20.519523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:56.787 pt3 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.787 [2024-11-20 17:01:20.528007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:56.787 [2024-11-20 17:01:20.530709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:56.787 [2024-11-20 17:01:20.530862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:56.787 [2024-11-20 17:01:20.531084] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:56.787 [2024-11-20 17:01:20.531109] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:56.787 [2024-11-20 17:01:20.531448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:56.787 [2024-11-20 17:01:20.531658] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:56.787 [2024-11-20 17:01:20.531674] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:56.787 [2024-11-20 17:01:20.531928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.787 "name": "raid_bdev1", 00:08:56.787 "uuid": "28fdb1b2-278a-4cae-9d5f-3a72c2871ded", 00:08:56.787 "strip_size_kb": 64, 00:08:56.787 "state": "online", 00:08:56.787 "raid_level": "concat", 00:08:56.787 "superblock": true, 00:08:56.787 "num_base_bdevs": 3, 00:08:56.787 "num_base_bdevs_discovered": 3, 00:08:56.787 "num_base_bdevs_operational": 3, 00:08:56.787 "base_bdevs_list": [ 00:08:56.787 { 00:08:56.787 "name": "pt1", 00:08:56.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:56.787 "is_configured": true, 00:08:56.787 "data_offset": 2048, 00:08:56.787 "data_size": 63488 00:08:56.787 }, 00:08:56.787 { 00:08:56.787 "name": "pt2", 00:08:56.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.787 "is_configured": true, 00:08:56.787 "data_offset": 2048, 00:08:56.787 "data_size": 63488 00:08:56.787 }, 00:08:56.787 { 00:08:56.787 "name": "pt3", 00:08:56.787 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:56.787 "is_configured": true, 00:08:56.787 "data_offset": 2048, 00:08:56.787 "data_size": 63488 00:08:56.787 } 00:08:56.787 ] 00:08:56.787 }' 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.787 17:01:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.354 [2024-11-20 17:01:21.060572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.354 "name": "raid_bdev1", 00:08:57.354 "aliases": [ 00:08:57.354 "28fdb1b2-278a-4cae-9d5f-3a72c2871ded" 00:08:57.354 ], 00:08:57.354 "product_name": "Raid Volume", 00:08:57.354 "block_size": 512, 00:08:57.354 "num_blocks": 190464, 00:08:57.354 "uuid": "28fdb1b2-278a-4cae-9d5f-3a72c2871ded", 00:08:57.354 "assigned_rate_limits": { 00:08:57.354 "rw_ios_per_sec": 0, 00:08:57.354 "rw_mbytes_per_sec": 0, 00:08:57.354 "r_mbytes_per_sec": 0, 00:08:57.354 "w_mbytes_per_sec": 0 00:08:57.354 }, 00:08:57.354 "claimed": false, 00:08:57.354 "zoned": false, 00:08:57.354 "supported_io_types": { 00:08:57.354 "read": true, 00:08:57.354 "write": true, 00:08:57.354 "unmap": true, 00:08:57.354 "flush": true, 00:08:57.354 "reset": true, 00:08:57.354 "nvme_admin": false, 00:08:57.354 "nvme_io": false, 00:08:57.354 "nvme_io_md": false, 00:08:57.354 "write_zeroes": true, 00:08:57.354 "zcopy": false, 00:08:57.354 "get_zone_info": false, 00:08:57.354 "zone_management": false, 00:08:57.354 "zone_append": false, 00:08:57.354 "compare": 
false, 00:08:57.354 "compare_and_write": false, 00:08:57.354 "abort": false, 00:08:57.354 "seek_hole": false, 00:08:57.354 "seek_data": false, 00:08:57.354 "copy": false, 00:08:57.354 "nvme_iov_md": false 00:08:57.354 }, 00:08:57.354 "memory_domains": [ 00:08:57.354 { 00:08:57.354 "dma_device_id": "system", 00:08:57.354 "dma_device_type": 1 00:08:57.354 }, 00:08:57.354 { 00:08:57.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.354 "dma_device_type": 2 00:08:57.354 }, 00:08:57.354 { 00:08:57.354 "dma_device_id": "system", 00:08:57.354 "dma_device_type": 1 00:08:57.354 }, 00:08:57.354 { 00:08:57.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.354 "dma_device_type": 2 00:08:57.354 }, 00:08:57.354 { 00:08:57.354 "dma_device_id": "system", 00:08:57.354 "dma_device_type": 1 00:08:57.354 }, 00:08:57.354 { 00:08:57.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.354 "dma_device_type": 2 00:08:57.354 } 00:08:57.354 ], 00:08:57.354 "driver_specific": { 00:08:57.354 "raid": { 00:08:57.354 "uuid": "28fdb1b2-278a-4cae-9d5f-3a72c2871ded", 00:08:57.354 "strip_size_kb": 64, 00:08:57.354 "state": "online", 00:08:57.354 "raid_level": "concat", 00:08:57.354 "superblock": true, 00:08:57.354 "num_base_bdevs": 3, 00:08:57.354 "num_base_bdevs_discovered": 3, 00:08:57.354 "num_base_bdevs_operational": 3, 00:08:57.354 "base_bdevs_list": [ 00:08:57.354 { 00:08:57.354 "name": "pt1", 00:08:57.354 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:57.354 "is_configured": true, 00:08:57.354 "data_offset": 2048, 00:08:57.354 "data_size": 63488 00:08:57.354 }, 00:08:57.354 { 00:08:57.354 "name": "pt2", 00:08:57.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.354 "is_configured": true, 00:08:57.354 "data_offset": 2048, 00:08:57.354 "data_size": 63488 00:08:57.354 }, 00:08:57.354 { 00:08:57.354 "name": "pt3", 00:08:57.354 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:57.354 "is_configured": true, 00:08:57.354 "data_offset": 2048, 00:08:57.354 
"data_size": 63488 00:08:57.354 } 00:08:57.354 ] 00:08:57.354 } 00:08:57.354 } 00:08:57.354 }' 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:57.354 pt2 00:08:57.354 pt3' 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.354 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.613 [2024-11-20 17:01:21.368538] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=28fdb1b2-278a-4cae-9d5f-3a72c2871ded 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 28fdb1b2-278a-4cae-9d5f-3a72c2871ded ']' 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.613 [2024-11-20 17:01:21.416215] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:57.613 [2024-11-20 17:01:21.416250] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.613 [2024-11-20 17:01:21.416361] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.613 [2024-11-20 17:01:21.416465] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.613 [2024-11-20 17:01:21.416482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:57.613 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:57.614 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.614 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.872 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.872 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:57.872 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:57.872 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.872 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.872 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.872 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:57.872 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:57.872 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.872 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.872 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.872 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:57.872 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.872 17:01:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.872 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:57.872 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.873 [2024-11-20 17:01:21.556358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:57.873 [2024-11-20 17:01:21.558846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:08:57.873 [2024-11-20 17:01:21.558925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:57.873 [2024-11-20 17:01:21.558998] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:57.873 [2024-11-20 17:01:21.559065] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:57.873 [2024-11-20 17:01:21.559101] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:57.873 [2024-11-20 17:01:21.559129] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:57.873 [2024-11-20 17:01:21.559143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:57.873 request: 00:08:57.873 { 00:08:57.873 "name": "raid_bdev1", 00:08:57.873 "raid_level": "concat", 00:08:57.873 "base_bdevs": [ 00:08:57.873 "malloc1", 00:08:57.873 "malloc2", 00:08:57.873 "malloc3" 00:08:57.873 ], 00:08:57.873 "strip_size_kb": 64, 00:08:57.873 "superblock": false, 00:08:57.873 "method": "bdev_raid_create", 00:08:57.873 "req_id": 1 00:08:57.873 } 00:08:57.873 Got JSON-RPC error response 00:08:57.873 response: 00:08:57.873 { 00:08:57.873 "code": -17, 00:08:57.873 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:57.873 } 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.873 [2024-11-20 17:01:21.624331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:57.873 [2024-11-20 17:01:21.624404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.873 [2024-11-20 17:01:21.624432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:57.873 [2024-11-20 17:01:21.624447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.873 [2024-11-20 17:01:21.627445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.873 [2024-11-20 17:01:21.627485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:57.873 [2024-11-20 17:01:21.627582] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:57.873 [2024-11-20 17:01:21.627648] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:57.873 pt1 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.873 "name": "raid_bdev1", 
00:08:57.873 "uuid": "28fdb1b2-278a-4cae-9d5f-3a72c2871ded", 00:08:57.873 "strip_size_kb": 64, 00:08:57.873 "state": "configuring", 00:08:57.873 "raid_level": "concat", 00:08:57.873 "superblock": true, 00:08:57.873 "num_base_bdevs": 3, 00:08:57.873 "num_base_bdevs_discovered": 1, 00:08:57.873 "num_base_bdevs_operational": 3, 00:08:57.873 "base_bdevs_list": [ 00:08:57.873 { 00:08:57.873 "name": "pt1", 00:08:57.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:57.873 "is_configured": true, 00:08:57.873 "data_offset": 2048, 00:08:57.873 "data_size": 63488 00:08:57.873 }, 00:08:57.873 { 00:08:57.873 "name": null, 00:08:57.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.873 "is_configured": false, 00:08:57.873 "data_offset": 2048, 00:08:57.873 "data_size": 63488 00:08:57.873 }, 00:08:57.873 { 00:08:57.873 "name": null, 00:08:57.873 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:57.873 "is_configured": false, 00:08:57.873 "data_offset": 2048, 00:08:57.873 "data_size": 63488 00:08:57.873 } 00:08:57.873 ] 00:08:57.873 }' 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.873 17:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.441 [2024-11-20 17:01:22.152510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:58.441 [2024-11-20 17:01:22.152602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.441 [2024-11-20 17:01:22.152640] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:58.441 [2024-11-20 17:01:22.152655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.441 [2024-11-20 17:01:22.153245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.441 [2024-11-20 17:01:22.153292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:58.441 [2024-11-20 17:01:22.153398] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:58.441 [2024-11-20 17:01:22.153453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:58.441 pt2 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.441 [2024-11-20 17:01:22.160488] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.441 "name": "raid_bdev1", 00:08:58.441 "uuid": "28fdb1b2-278a-4cae-9d5f-3a72c2871ded", 00:08:58.441 "strip_size_kb": 64, 00:08:58.441 "state": "configuring", 00:08:58.441 "raid_level": "concat", 00:08:58.441 "superblock": true, 00:08:58.441 "num_base_bdevs": 3, 00:08:58.441 "num_base_bdevs_discovered": 1, 00:08:58.441 "num_base_bdevs_operational": 3, 00:08:58.441 "base_bdevs_list": [ 00:08:58.441 { 00:08:58.441 "name": "pt1", 00:08:58.441 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.441 "is_configured": true, 00:08:58.441 "data_offset": 2048, 00:08:58.441 "data_size": 63488 00:08:58.441 }, 00:08:58.441 { 00:08:58.441 "name": null, 00:08:58.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.441 "is_configured": false, 00:08:58.441 "data_offset": 0, 00:08:58.441 "data_size": 63488 00:08:58.441 }, 00:08:58.441 { 00:08:58.441 "name": null, 00:08:58.441 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:58.441 "is_configured": false, 00:08:58.441 "data_offset": 2048, 00:08:58.441 "data_size": 63488 00:08:58.441 } 00:08:58.441 ] 00:08:58.441 }' 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.441 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.009 [2024-11-20 17:01:22.684691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:59.009 [2024-11-20 17:01:22.684806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.009 [2024-11-20 17:01:22.684834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:59.009 [2024-11-20 17:01:22.684852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.009 [2024-11-20 17:01:22.685403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.009 [2024-11-20 17:01:22.685434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:59.009 [2024-11-20 17:01:22.685531] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:59.009 [2024-11-20 17:01:22.685568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:59.009 pt2 00:08:59.009 17:01:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.009 [2024-11-20 17:01:22.692636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:59.009 [2024-11-20 17:01:22.692706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.009 [2024-11-20 17:01:22.692728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:59.009 [2024-11-20 17:01:22.692744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.009 [2024-11-20 17:01:22.693195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.009 [2024-11-20 17:01:22.693234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:59.009 [2024-11-20 17:01:22.693309] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:59.009 [2024-11-20 17:01:22.693342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:59.009 [2024-11-20 17:01:22.693500] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:59.009 [2024-11-20 17:01:22.693521] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:59.009 [2024-11-20 17:01:22.693845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:08:59.009 [2024-11-20 17:01:22.694035] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:59.009 [2024-11-20 17:01:22.694049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:59.009 [2024-11-20 17:01:22.694214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.009 pt3 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.009 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.010 17:01:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.010 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.010 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.010 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.010 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.010 "name": "raid_bdev1", 00:08:59.010 "uuid": "28fdb1b2-278a-4cae-9d5f-3a72c2871ded", 00:08:59.010 "strip_size_kb": 64, 00:08:59.010 "state": "online", 00:08:59.010 "raid_level": "concat", 00:08:59.010 "superblock": true, 00:08:59.010 "num_base_bdevs": 3, 00:08:59.010 "num_base_bdevs_discovered": 3, 00:08:59.010 "num_base_bdevs_operational": 3, 00:08:59.010 "base_bdevs_list": [ 00:08:59.010 { 00:08:59.010 "name": "pt1", 00:08:59.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.010 "is_configured": true, 00:08:59.010 "data_offset": 2048, 00:08:59.010 "data_size": 63488 00:08:59.010 }, 00:08:59.010 { 00:08:59.010 "name": "pt2", 00:08:59.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.010 "is_configured": true, 00:08:59.010 "data_offset": 2048, 00:08:59.010 "data_size": 63488 00:08:59.010 }, 00:08:59.010 { 00:08:59.010 "name": "pt3", 00:08:59.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:59.010 "is_configured": true, 00:08:59.010 "data_offset": 2048, 00:08:59.010 "data_size": 63488 00:08:59.010 } 00:08:59.010 ] 00:08:59.010 }' 00:08:59.010 17:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.010 17:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.578 [2024-11-20 17:01:23.221254] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.578 "name": "raid_bdev1", 00:08:59.578 "aliases": [ 00:08:59.578 "28fdb1b2-278a-4cae-9d5f-3a72c2871ded" 00:08:59.578 ], 00:08:59.578 "product_name": "Raid Volume", 00:08:59.578 "block_size": 512, 00:08:59.578 "num_blocks": 190464, 00:08:59.578 "uuid": "28fdb1b2-278a-4cae-9d5f-3a72c2871ded", 00:08:59.578 "assigned_rate_limits": { 00:08:59.578 "rw_ios_per_sec": 0, 00:08:59.578 "rw_mbytes_per_sec": 0, 00:08:59.578 "r_mbytes_per_sec": 0, 00:08:59.578 "w_mbytes_per_sec": 0 00:08:59.578 }, 00:08:59.578 "claimed": false, 00:08:59.578 "zoned": false, 00:08:59.578 "supported_io_types": { 00:08:59.578 "read": true, 00:08:59.578 "write": true, 00:08:59.578 "unmap": true, 00:08:59.578 "flush": true, 00:08:59.578 "reset": true, 00:08:59.578 "nvme_admin": false, 00:08:59.578 "nvme_io": false, 
00:08:59.578 "nvme_io_md": false, 00:08:59.578 "write_zeroes": true, 00:08:59.578 "zcopy": false, 00:08:59.578 "get_zone_info": false, 00:08:59.578 "zone_management": false, 00:08:59.578 "zone_append": false, 00:08:59.578 "compare": false, 00:08:59.578 "compare_and_write": false, 00:08:59.578 "abort": false, 00:08:59.578 "seek_hole": false, 00:08:59.578 "seek_data": false, 00:08:59.578 "copy": false, 00:08:59.578 "nvme_iov_md": false 00:08:59.578 }, 00:08:59.578 "memory_domains": [ 00:08:59.578 { 00:08:59.578 "dma_device_id": "system", 00:08:59.578 "dma_device_type": 1 00:08:59.578 }, 00:08:59.578 { 00:08:59.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.578 "dma_device_type": 2 00:08:59.578 }, 00:08:59.578 { 00:08:59.578 "dma_device_id": "system", 00:08:59.578 "dma_device_type": 1 00:08:59.578 }, 00:08:59.578 { 00:08:59.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.578 "dma_device_type": 2 00:08:59.578 }, 00:08:59.578 { 00:08:59.578 "dma_device_id": "system", 00:08:59.578 "dma_device_type": 1 00:08:59.578 }, 00:08:59.578 { 00:08:59.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.578 "dma_device_type": 2 00:08:59.578 } 00:08:59.578 ], 00:08:59.578 "driver_specific": { 00:08:59.578 "raid": { 00:08:59.578 "uuid": "28fdb1b2-278a-4cae-9d5f-3a72c2871ded", 00:08:59.578 "strip_size_kb": 64, 00:08:59.578 "state": "online", 00:08:59.578 "raid_level": "concat", 00:08:59.578 "superblock": true, 00:08:59.578 "num_base_bdevs": 3, 00:08:59.578 "num_base_bdevs_discovered": 3, 00:08:59.578 "num_base_bdevs_operational": 3, 00:08:59.578 "base_bdevs_list": [ 00:08:59.578 { 00:08:59.578 "name": "pt1", 00:08:59.578 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.578 "is_configured": true, 00:08:59.578 "data_offset": 2048, 00:08:59.578 "data_size": 63488 00:08:59.578 }, 00:08:59.578 { 00:08:59.578 "name": "pt2", 00:08:59.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.578 "is_configured": true, 00:08:59.578 "data_offset": 2048, 00:08:59.578 
"data_size": 63488 00:08:59.578 }, 00:08:59.578 { 00:08:59.578 "name": "pt3", 00:08:59.578 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:59.578 "is_configured": true, 00:08:59.578 "data_offset": 2048, 00:08:59.578 "data_size": 63488 00:08:59.578 } 00:08:59.578 ] 00:08:59.578 } 00:08:59.578 } 00:08:59.578 }' 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:59.578 pt2 00:08:59.578 pt3' 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.578 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:59.837 [2024-11-20 17:01:23.537260] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 28fdb1b2-278a-4cae-9d5f-3a72c2871ded '!=' 28fdb1b2-278a-4cae-9d5f-3a72c2871ded ']' 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66706 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66706 ']' 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66706 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66706 00:08:59.837 killing process with pid 66706 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66706' 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66706 00:08:59.837 [2024-11-20 17:01:23.618822] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:08:59.837 [2024-11-20 17:01:23.618924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.837 17:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66706 00:08:59.837 [2024-11-20 17:01:23.619004] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.837 [2024-11-20 17:01:23.619025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:00.096 [2024-11-20 17:01:23.894984] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.473 ************************************ 00:09:01.473 END TEST raid_superblock_test 00:09:01.473 ************************************ 00:09:01.473 17:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:01.473 00:09:01.473 real 0m5.723s 00:09:01.473 user 0m8.617s 00:09:01.473 sys 0m0.803s 00:09:01.473 17:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.473 17:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.473 17:01:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:01.473 17:01:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:01.473 17:01:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.473 17:01:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.473 ************************************ 00:09:01.473 START TEST raid_read_error_test 00:09:01.473 ************************************ 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:01.473 17:01:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RNNcVqZYzj 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66967 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66967 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66967 ']' 00:09:01.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.473 17:01:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.473 [2024-11-20 17:01:25.158606] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:09:01.473 [2024-11-20 17:01:25.159080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66967 ] 00:09:01.732 [2024-11-20 17:01:25.348827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.732 [2024-11-20 17:01:25.514664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.990 [2024-11-20 17:01:25.742930] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.990 [2024-11-20 17:01:25.742969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.602 BaseBdev1_malloc 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.602 true 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.602 [2024-11-20 17:01:26.230896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:02.602 [2024-11-20 17:01:26.230966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.602 [2024-11-20 17:01:26.230997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:02.602 [2024-11-20 17:01:26.231017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.602 [2024-11-20 17:01:26.233855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.602 [2024-11-20 17:01:26.233929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:02.602 BaseBdev1 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.602 BaseBdev2_malloc 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.602 true 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.602 [2024-11-20 17:01:26.292753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:02.602 [2024-11-20 17:01:26.292830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.602 [2024-11-20 17:01:26.292858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:02.602 [2024-11-20 17:01:26.292878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.602 [2024-11-20 17:01:26.295657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.602 [2024-11-20 17:01:26.295708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:02.602 BaseBdev2 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.602 BaseBdev3_malloc 00:09:02.602 17:01:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.602 true 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.602 [2024-11-20 17:01:26.373442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:02.602 [2024-11-20 17:01:26.373520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.602 [2024-11-20 17:01:26.373578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:02.602 [2024-11-20 17:01:26.373613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.602 [2024-11-20 17:01:26.376558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.602 [2024-11-20 17:01:26.376750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:02.602 BaseBdev3 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.602 [2024-11-20 17:01:26.381647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.602 [2024-11-20 17:01:26.384073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.602 [2024-11-20 17:01:26.384178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.602 [2024-11-20 17:01:26.384437] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:02.602 [2024-11-20 17:01:26.384460] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:02.602 [2024-11-20 17:01:26.384795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:02.602 [2024-11-20 17:01:26.385081] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:02.602 [2024-11-20 17:01:26.385104] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:02.602 [2024-11-20 17:01:26.385292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.602 17:01:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.602 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.602 "name": "raid_bdev1", 00:09:02.602 "uuid": "e285f2a7-5ee6-4710-a5d7-1ae043758867", 00:09:02.602 "strip_size_kb": 64, 00:09:02.602 "state": "online", 00:09:02.602 "raid_level": "concat", 00:09:02.602 "superblock": true, 00:09:02.602 "num_base_bdevs": 3, 00:09:02.602 "num_base_bdevs_discovered": 3, 00:09:02.602 "num_base_bdevs_operational": 3, 00:09:02.602 "base_bdevs_list": [ 00:09:02.602 { 00:09:02.602 "name": "BaseBdev1", 00:09:02.603 "uuid": "a0b419f1-9dc4-5ded-9f1f-2bbba0de44df", 00:09:02.603 "is_configured": true, 00:09:02.603 "data_offset": 2048, 00:09:02.603 "data_size": 63488 00:09:02.603 }, 00:09:02.603 { 00:09:02.603 "name": "BaseBdev2", 00:09:02.603 "uuid": "9cb98509-5d1e-5cda-8f11-9a529183cf3d", 00:09:02.603 "is_configured": true, 00:09:02.603 "data_offset": 2048, 00:09:02.603 "data_size": 63488 
00:09:02.603 }, 00:09:02.603 { 00:09:02.603 "name": "BaseBdev3", 00:09:02.603 "uuid": "afa0c728-67dd-5821-8507-d434cbdd8ee0", 00:09:02.603 "is_configured": true, 00:09:02.603 "data_offset": 2048, 00:09:02.603 "data_size": 63488 00:09:02.603 } 00:09:02.603 ] 00:09:02.603 }' 00:09:02.603 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.603 17:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.170 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:03.170 17:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:03.170 [2024-11-20 17:01:27.019217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.110 "name": "raid_bdev1", 00:09:04.110 "uuid": "e285f2a7-5ee6-4710-a5d7-1ae043758867", 00:09:04.110 "strip_size_kb": 64, 00:09:04.110 "state": "online", 00:09:04.110 "raid_level": "concat", 00:09:04.110 "superblock": true, 00:09:04.110 "num_base_bdevs": 3, 00:09:04.110 "num_base_bdevs_discovered": 3, 00:09:04.110 "num_base_bdevs_operational": 3, 00:09:04.110 "base_bdevs_list": [ 00:09:04.110 { 00:09:04.110 "name": "BaseBdev1", 00:09:04.110 "uuid": "a0b419f1-9dc4-5ded-9f1f-2bbba0de44df", 00:09:04.110 "is_configured": true, 00:09:04.110 "data_offset": 2048, 00:09:04.110 "data_size": 63488 
00:09:04.110 }, 00:09:04.110 { 00:09:04.110 "name": "BaseBdev2", 00:09:04.110 "uuid": "9cb98509-5d1e-5cda-8f11-9a529183cf3d", 00:09:04.110 "is_configured": true, 00:09:04.110 "data_offset": 2048, 00:09:04.110 "data_size": 63488 00:09:04.110 }, 00:09:04.110 { 00:09:04.110 "name": "BaseBdev3", 00:09:04.110 "uuid": "afa0c728-67dd-5821-8507-d434cbdd8ee0", 00:09:04.110 "is_configured": true, 00:09:04.110 "data_offset": 2048, 00:09:04.110 "data_size": 63488 00:09:04.110 } 00:09:04.110 ] 00:09:04.110 }' 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.110 17:01:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.679 17:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:04.679 17:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.679 17:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.679 [2024-11-20 17:01:28.425523] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:04.679 [2024-11-20 17:01:28.425723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.679 [2024-11-20 17:01:28.429576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.679 [2024-11-20 17:01:28.429854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.679 [2024-11-20 17:01:28.429965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.679 [2024-11-20 17:01:28.430215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:04.679 { 00:09:04.679 "results": [ 00:09:04.679 { 00:09:04.679 "job": "raid_bdev1", 00:09:04.679 "core_mask": "0x1", 00:09:04.679 "workload": "randrw", 00:09:04.679 "percentage": 50, 
00:09:04.679 "status": "finished", 00:09:04.679 "queue_depth": 1, 00:09:04.679 "io_size": 131072, 00:09:04.679 "runtime": 1.404186, 00:09:04.679 "iops": 10056.360054864526, 00:09:04.679 "mibps": 1257.0450068580658, 00:09:04.679 "io_failed": 1, 00:09:04.679 "io_timeout": 0, 00:09:04.679 "avg_latency_us": 138.24266148240656, 00:09:04.679 "min_latency_us": 39.33090909090909, 00:09:04.679 "max_latency_us": 2353.338181818182 00:09:04.679 } 00:09:04.679 ], 00:09:04.679 "core_count": 1 00:09:04.679 } 00:09:04.679 17:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.679 17:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66967 00:09:04.679 17:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66967 ']' 00:09:04.679 17:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66967 00:09:04.679 17:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:04.679 17:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.679 17:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66967 00:09:04.679 killing process with pid 66967 00:09:04.679 17:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.679 17:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.679 17:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66967' 00:09:04.679 17:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66967 00:09:04.679 [2024-11-20 17:01:28.470956] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:04.679 17:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66967 00:09:04.939 [2024-11-20 
17:01:28.680095] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:06.319 17:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RNNcVqZYzj 00:09:06.319 17:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:06.319 17:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:06.319 17:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:06.319 17:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:06.319 17:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.319 17:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:06.319 17:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:06.319 00:09:06.319 real 0m4.774s 00:09:06.319 user 0m5.885s 00:09:06.319 sys 0m0.628s 00:09:06.319 17:01:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.319 ************************************ 00:09:06.319 END TEST raid_read_error_test 00:09:06.319 ************************************ 00:09:06.319 17:01:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.319 17:01:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:06.319 17:01:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:06.319 17:01:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.319 17:01:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:06.319 ************************************ 00:09:06.319 START TEST raid_write_error_test 00:09:06.319 ************************************ 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:06.319 17:01:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:06.319 17:01:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3ZbxOJMIoF 00:09:06.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67113 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67113 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67113 ']' 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.319 17:01:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.319 [2024-11-20 17:01:29.981921] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:09:06.319 [2024-11-20 17:01:29.982096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67113 ] 00:09:06.319 [2024-11-20 17:01:30.170085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.578 [2024-11-20 17:01:30.297354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.837 [2024-11-20 17:01:30.504712] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.837 [2024-11-20 17:01:30.505445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.096 17:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.096 17:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:07.096 17:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:07.096 17:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:07.096 17:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.096 17:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.356 BaseBdev1_malloc 00:09:07.356 17:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.356 17:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:07.356 17:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.356 17:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.356 true 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.356 [2024-11-20 17:01:31.015954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:07.356 [2024-11-20 17:01:31.016023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.356 [2024-11-20 17:01:31.016054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:07.356 [2024-11-20 17:01:31.016072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.356 [2024-11-20 17:01:31.018943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.356 [2024-11-20 17:01:31.018993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:07.356 BaseBdev1 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:07.356 BaseBdev2_malloc 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.356 true 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.356 [2024-11-20 17:01:31.076524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:07.356 [2024-11-20 17:01:31.076786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.356 [2024-11-20 17:01:31.076821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:07.356 [2024-11-20 17:01:31.076840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.356 [2024-11-20 17:01:31.079677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.356 [2024-11-20 17:01:31.079740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:07.356 BaseBdev2 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:07.356 17:01:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.356 BaseBdev3_malloc 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.356 true 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.356 [2024-11-20 17:01:31.150176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:07.356 [2024-11-20 17:01:31.150237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.356 [2024-11-20 17:01:31.150264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:07.356 [2024-11-20 17:01:31.150282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.356 [2024-11-20 17:01:31.153179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.356 [2024-11-20 17:01:31.153397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:07.356 BaseBdev3 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.356 [2024-11-20 17:01:31.162307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:07.356 [2024-11-20 17:01:31.164827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.356 [2024-11-20 17:01:31.164984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:07.356 [2024-11-20 17:01:31.165244] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:07.356 [2024-11-20 17:01:31.165269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:07.356 [2024-11-20 17:01:31.165589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:07.356 [2024-11-20 17:01:31.165854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:07.356 [2024-11-20 17:01:31.165878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:07.356 [2024-11-20 17:01:31.166087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.356 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.356 "name": "raid_bdev1", 00:09:07.356 "uuid": "3c386813-3844-4c8e-916c-8d8fe6105cf6", 00:09:07.356 "strip_size_kb": 64, 00:09:07.356 "state": "online", 00:09:07.356 "raid_level": "concat", 00:09:07.356 "superblock": true, 00:09:07.356 "num_base_bdevs": 3, 00:09:07.356 "num_base_bdevs_discovered": 3, 00:09:07.356 "num_base_bdevs_operational": 3, 00:09:07.356 "base_bdevs_list": [ 00:09:07.356 { 00:09:07.356 
"name": "BaseBdev1", 00:09:07.356 "uuid": "69edc65d-91cb-56b0-a2de-a362d9262a80", 00:09:07.356 "is_configured": true, 00:09:07.356 "data_offset": 2048, 00:09:07.356 "data_size": 63488 00:09:07.356 }, 00:09:07.356 { 00:09:07.356 "name": "BaseBdev2", 00:09:07.356 "uuid": "72959e30-abf2-533f-b3e4-9ae06c739c94", 00:09:07.356 "is_configured": true, 00:09:07.356 "data_offset": 2048, 00:09:07.356 "data_size": 63488 00:09:07.356 }, 00:09:07.356 { 00:09:07.356 "name": "BaseBdev3", 00:09:07.356 "uuid": "ffa03fef-08e8-573b-aacc-1077cfd8c365", 00:09:07.356 "is_configured": true, 00:09:07.356 "data_offset": 2048, 00:09:07.357 "data_size": 63488 00:09:07.357 } 00:09:07.357 ] 00:09:07.357 }' 00:09:07.357 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.357 17:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.925 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:07.925 17:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:08.183 [2024-11-20 17:01:31.811981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.120 17:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.120 "name": "raid_bdev1", 00:09:09.120 "uuid": "3c386813-3844-4c8e-916c-8d8fe6105cf6", 00:09:09.120 "strip_size_kb": 64, 00:09:09.120 "state": "online", 
00:09:09.120 "raid_level": "concat", 00:09:09.121 "superblock": true, 00:09:09.121 "num_base_bdevs": 3, 00:09:09.121 "num_base_bdevs_discovered": 3, 00:09:09.121 "num_base_bdevs_operational": 3, 00:09:09.121 "base_bdevs_list": [ 00:09:09.121 { 00:09:09.121 "name": "BaseBdev1", 00:09:09.121 "uuid": "69edc65d-91cb-56b0-a2de-a362d9262a80", 00:09:09.121 "is_configured": true, 00:09:09.121 "data_offset": 2048, 00:09:09.121 "data_size": 63488 00:09:09.121 }, 00:09:09.121 { 00:09:09.121 "name": "BaseBdev2", 00:09:09.121 "uuid": "72959e30-abf2-533f-b3e4-9ae06c739c94", 00:09:09.121 "is_configured": true, 00:09:09.121 "data_offset": 2048, 00:09:09.121 "data_size": 63488 00:09:09.121 }, 00:09:09.121 { 00:09:09.121 "name": "BaseBdev3", 00:09:09.121 "uuid": "ffa03fef-08e8-573b-aacc-1077cfd8c365", 00:09:09.121 "is_configured": true, 00:09:09.121 "data_offset": 2048, 00:09:09.121 "data_size": 63488 00:09:09.121 } 00:09:09.121 ] 00:09:09.121 }' 00:09:09.121 17:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.121 17:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.380 17:01:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:09.380 17:01:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.380 17:01:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.380 [2024-11-20 17:01:33.227022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.380 [2024-11-20 17:01:33.227057] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.380 [2024-11-20 17:01:33.230820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.380 [2024-11-20 17:01:33.231020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.380 [2024-11-20 17:01:33.231123] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.380 [2024-11-20 17:01:33.231314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:09.380 { 00:09:09.380 "results": [ 00:09:09.380 { 00:09:09.380 "job": "raid_bdev1", 00:09:09.380 "core_mask": "0x1", 00:09:09.380 "workload": "randrw", 00:09:09.380 "percentage": 50, 00:09:09.380 "status": "finished", 00:09:09.380 "queue_depth": 1, 00:09:09.380 "io_size": 131072, 00:09:09.380 "runtime": 1.412572, 00:09:09.380 "iops": 10857.499653115026, 00:09:09.380 "mibps": 1357.1874566393783, 00:09:09.380 "io_failed": 1, 00:09:09.380 "io_timeout": 0, 00:09:09.380 "avg_latency_us": 127.77308408112945, 00:09:09.380 "min_latency_us": 37.70181818181818, 00:09:09.380 "max_latency_us": 1899.0545454545454 00:09:09.380 } 00:09:09.380 ], 00:09:09.380 "core_count": 1 00:09:09.380 } 00:09:09.380 17:01:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.380 17:01:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67113 00:09:09.380 17:01:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67113 ']' 00:09:09.380 17:01:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67113 00:09:09.380 17:01:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:09.380 17:01:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.380 17:01:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67113 00:09:09.640 killing process with pid 67113 00:09:09.640 17:01:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.640 17:01:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.640 
17:01:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67113' 00:09:09.640 17:01:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67113 00:09:09.640 [2024-11-20 17:01:33.267409] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:09.640 17:01:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67113 00:09:09.640 [2024-11-20 17:01:33.473254] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:11.017 17:01:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3ZbxOJMIoF 00:09:11.017 17:01:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:11.017 17:01:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:11.017 17:01:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:11.017 17:01:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:11.017 17:01:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.017 17:01:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.017 17:01:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:11.017 ************************************ 00:09:11.017 END TEST raid_write_error_test 00:09:11.017 ************************************ 00:09:11.017 00:09:11.017 real 0m4.719s 00:09:11.017 user 0m5.891s 00:09:11.017 sys 0m0.565s 00:09:11.017 17:01:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.017 17:01:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.017 17:01:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:11.017 17:01:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:11.017 17:01:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:11.017 17:01:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.017 17:01:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:11.017 ************************************ 00:09:11.017 START TEST raid_state_function_test 00:09:11.017 ************************************ 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:11.017 Process raid pid: 67256 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67256 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67256' 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:11.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67256 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67256 ']' 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.017 17:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.018 17:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.018 [2024-11-20 17:01:34.755047] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:09:11.018 [2024-11-20 17:01:34.755221] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.277 [2024-11-20 17:01:34.946550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.277 [2024-11-20 17:01:35.107235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.535 [2024-11-20 17:01:35.333975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.535 [2024-11-20 17:01:35.334041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.102 17:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.102 17:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:12.102 17:01:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.102 17:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.102 17:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.102 [2024-11-20 17:01:35.821245] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.102 [2024-11-20 17:01:35.821323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.102 [2024-11-20 17:01:35.821341] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:12.102 [2024-11-20 17:01:35.821356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:12.102 [2024-11-20 17:01:35.821366] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:12.102 [2024-11-20 17:01:35.821379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:12.102 17:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.102 17:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:12.102 17:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.102 17:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.102 17:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.102 17:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.102 17:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.102 17:01:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.102 17:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.103 17:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.103 17:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.103 17:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.103 17:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.103 17:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.103 17:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.103 17:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.103 17:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.103 "name": "Existed_Raid", 00:09:12.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.103 "strip_size_kb": 0, 00:09:12.103 "state": "configuring", 00:09:12.103 "raid_level": "raid1", 00:09:12.103 "superblock": false, 00:09:12.103 "num_base_bdevs": 3, 00:09:12.103 "num_base_bdevs_discovered": 0, 00:09:12.103 "num_base_bdevs_operational": 3, 00:09:12.103 "base_bdevs_list": [ 00:09:12.103 { 00:09:12.103 "name": "BaseBdev1", 00:09:12.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.103 "is_configured": false, 00:09:12.103 "data_offset": 0, 00:09:12.103 "data_size": 0 00:09:12.103 }, 00:09:12.103 { 00:09:12.103 "name": "BaseBdev2", 00:09:12.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.103 "is_configured": false, 00:09:12.103 "data_offset": 0, 00:09:12.103 "data_size": 0 00:09:12.103 }, 00:09:12.103 { 00:09:12.103 "name": "BaseBdev3", 00:09:12.103 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:12.103 "is_configured": false, 00:09:12.103 "data_offset": 0, 00:09:12.103 "data_size": 0 00:09:12.103 } 00:09:12.103 ] 00:09:12.103 }' 00:09:12.103 17:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.103 17:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.670 [2024-11-20 17:01:36.381484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:12.670 [2024-11-20 17:01:36.381527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.670 [2024-11-20 17:01:36.389349] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.670 [2024-11-20 17:01:36.389414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.670 [2024-11-20 17:01:36.389443] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:12.670 [2024-11-20 17:01:36.389457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:12.670 [2024-11-20 17:01:36.389465] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:12.670 [2024-11-20 17:01:36.389478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.670 [2024-11-20 17:01:36.436642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:12.670 BaseBdev1 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.670 [ 00:09:12.670 { 00:09:12.670 "name": "BaseBdev1", 00:09:12.670 "aliases": [ 00:09:12.670 "1acaf67e-2416-4765-bc16-87cd4ce678d6" 00:09:12.670 ], 00:09:12.670 "product_name": "Malloc disk", 00:09:12.670 "block_size": 512, 00:09:12.670 "num_blocks": 65536, 00:09:12.670 "uuid": "1acaf67e-2416-4765-bc16-87cd4ce678d6", 00:09:12.670 "assigned_rate_limits": { 00:09:12.670 "rw_ios_per_sec": 0, 00:09:12.670 "rw_mbytes_per_sec": 0, 00:09:12.670 "r_mbytes_per_sec": 0, 00:09:12.670 "w_mbytes_per_sec": 0 00:09:12.670 }, 00:09:12.670 "claimed": true, 00:09:12.670 "claim_type": "exclusive_write", 00:09:12.670 "zoned": false, 00:09:12.670 "supported_io_types": { 00:09:12.670 "read": true, 00:09:12.670 "write": true, 00:09:12.670 "unmap": true, 00:09:12.670 "flush": true, 00:09:12.670 "reset": true, 00:09:12.670 "nvme_admin": false, 00:09:12.670 "nvme_io": false, 00:09:12.670 "nvme_io_md": false, 00:09:12.670 "write_zeroes": true, 00:09:12.670 "zcopy": true, 00:09:12.670 "get_zone_info": false, 00:09:12.670 "zone_management": false, 00:09:12.670 "zone_append": false, 00:09:12.670 "compare": false, 00:09:12.670 "compare_and_write": false, 00:09:12.670 "abort": true, 00:09:12.670 "seek_hole": false, 00:09:12.670 "seek_data": false, 00:09:12.670 "copy": true, 00:09:12.670 "nvme_iov_md": false 00:09:12.670 }, 00:09:12.670 "memory_domains": [ 00:09:12.670 { 00:09:12.670 "dma_device_id": "system", 00:09:12.670 "dma_device_type": 1 00:09:12.670 }, 00:09:12.670 { 00:09:12.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:12.670 "dma_device_type": 2 00:09:12.670 } 00:09:12.670 ], 00:09:12.670 "driver_specific": {} 00:09:12.670 } 00:09:12.670 ] 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.670 17:01:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.670 "name": "Existed_Raid", 00:09:12.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.670 "strip_size_kb": 0, 00:09:12.670 "state": "configuring", 00:09:12.670 "raid_level": "raid1", 00:09:12.670 "superblock": false, 00:09:12.670 "num_base_bdevs": 3, 00:09:12.670 "num_base_bdevs_discovered": 1, 00:09:12.670 "num_base_bdevs_operational": 3, 00:09:12.670 "base_bdevs_list": [ 00:09:12.670 { 00:09:12.670 "name": "BaseBdev1", 00:09:12.670 "uuid": "1acaf67e-2416-4765-bc16-87cd4ce678d6", 00:09:12.670 "is_configured": true, 00:09:12.670 "data_offset": 0, 00:09:12.670 "data_size": 65536 00:09:12.670 }, 00:09:12.670 { 00:09:12.670 "name": "BaseBdev2", 00:09:12.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.670 "is_configured": false, 00:09:12.670 "data_offset": 0, 00:09:12.670 "data_size": 0 00:09:12.670 }, 00:09:12.670 { 00:09:12.670 "name": "BaseBdev3", 00:09:12.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.670 "is_configured": false, 00:09:12.670 "data_offset": 0, 00:09:12.670 "data_size": 0 00:09:12.670 } 00:09:12.670 ] 00:09:12.670 }' 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.670 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.238 17:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.238 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.238 17:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.238 [2024-11-20 17:01:37.000981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.238 [2024-11-20 17:01:37.001064] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.238 [2024-11-20 17:01:37.013069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.238 [2024-11-20 17:01:37.015881] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.238 [2024-11-20 17:01:37.016079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.238 [2024-11-20 17:01:37.016246] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:13.238 [2024-11-20 17:01:37.016375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.238 17:01:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.238 "name": "Existed_Raid", 00:09:13.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.238 "strip_size_kb": 0, 00:09:13.238 "state": "configuring", 00:09:13.238 "raid_level": "raid1", 00:09:13.238 "superblock": false, 00:09:13.238 "num_base_bdevs": 3, 00:09:13.238 "num_base_bdevs_discovered": 1, 00:09:13.238 "num_base_bdevs_operational": 3, 00:09:13.238 "base_bdevs_list": [ 00:09:13.238 { 00:09:13.238 "name": "BaseBdev1", 00:09:13.238 "uuid": "1acaf67e-2416-4765-bc16-87cd4ce678d6", 00:09:13.238 "is_configured": true, 00:09:13.238 "data_offset": 0, 
00:09:13.238 "data_size": 65536 00:09:13.238 }, 00:09:13.238 { 00:09:13.238 "name": "BaseBdev2", 00:09:13.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.238 "is_configured": false, 00:09:13.238 "data_offset": 0, 00:09:13.238 "data_size": 0 00:09:13.238 }, 00:09:13.238 { 00:09:13.238 "name": "BaseBdev3", 00:09:13.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.238 "is_configured": false, 00:09:13.238 "data_offset": 0, 00:09:13.238 "data_size": 0 00:09:13.238 } 00:09:13.238 ] 00:09:13.238 }' 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.238 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.805 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:13.805 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.805 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.805 [2024-11-20 17:01:37.557814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.805 BaseBdev2 00:09:13.805 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.805 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:13.805 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:13.805 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.805 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:13.805 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.805 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:13.805 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.805 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.805 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.805 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.805 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:13.805 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.805 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.805 [ 00:09:13.805 { 00:09:13.805 "name": "BaseBdev2", 00:09:13.805 "aliases": [ 00:09:13.805 "db388e5f-2c67-4ba9-8311-15c476d231b6" 00:09:13.805 ], 00:09:13.805 "product_name": "Malloc disk", 00:09:13.805 "block_size": 512, 00:09:13.805 "num_blocks": 65536, 00:09:13.805 "uuid": "db388e5f-2c67-4ba9-8311-15c476d231b6", 00:09:13.805 "assigned_rate_limits": { 00:09:13.805 "rw_ios_per_sec": 0, 00:09:13.805 "rw_mbytes_per_sec": 0, 00:09:13.805 "r_mbytes_per_sec": 0, 00:09:13.805 "w_mbytes_per_sec": 0 00:09:13.805 }, 00:09:13.806 "claimed": true, 00:09:13.806 "claim_type": "exclusive_write", 00:09:13.806 "zoned": false, 00:09:13.806 "supported_io_types": { 00:09:13.806 "read": true, 00:09:13.806 "write": true, 00:09:13.806 "unmap": true, 00:09:13.806 "flush": true, 00:09:13.806 "reset": true, 00:09:13.806 "nvme_admin": false, 00:09:13.806 "nvme_io": false, 00:09:13.806 "nvme_io_md": false, 00:09:13.806 "write_zeroes": true, 00:09:13.806 "zcopy": true, 00:09:13.806 "get_zone_info": false, 00:09:13.806 "zone_management": false, 00:09:13.806 "zone_append": false, 00:09:13.806 "compare": false, 00:09:13.806 "compare_and_write": false, 00:09:13.806 "abort": true, 00:09:13.806 "seek_hole": 
false, 00:09:13.806 "seek_data": false, 00:09:13.806 "copy": true, 00:09:13.806 "nvme_iov_md": false 00:09:13.806 }, 00:09:13.806 "memory_domains": [ 00:09:13.806 { 00:09:13.806 "dma_device_id": "system", 00:09:13.806 "dma_device_type": 1 00:09:13.806 }, 00:09:13.806 { 00:09:13.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.806 "dma_device_type": 2 00:09:13.806 } 00:09:13.806 ], 00:09:13.806 "driver_specific": {} 00:09:13.806 } 00:09:13.806 ] 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.806 "name": "Existed_Raid", 00:09:13.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.806 "strip_size_kb": 0, 00:09:13.806 "state": "configuring", 00:09:13.806 "raid_level": "raid1", 00:09:13.806 "superblock": false, 00:09:13.806 "num_base_bdevs": 3, 00:09:13.806 "num_base_bdevs_discovered": 2, 00:09:13.806 "num_base_bdevs_operational": 3, 00:09:13.806 "base_bdevs_list": [ 00:09:13.806 { 00:09:13.806 "name": "BaseBdev1", 00:09:13.806 "uuid": "1acaf67e-2416-4765-bc16-87cd4ce678d6", 00:09:13.806 "is_configured": true, 00:09:13.806 "data_offset": 0, 00:09:13.806 "data_size": 65536 00:09:13.806 }, 00:09:13.806 { 00:09:13.806 "name": "BaseBdev2", 00:09:13.806 "uuid": "db388e5f-2c67-4ba9-8311-15c476d231b6", 00:09:13.806 "is_configured": true, 00:09:13.806 "data_offset": 0, 00:09:13.806 "data_size": 65536 00:09:13.806 }, 00:09:13.806 { 00:09:13.806 "name": "BaseBdev3", 00:09:13.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.806 "is_configured": false, 00:09:13.806 "data_offset": 0, 00:09:13.806 "data_size": 0 00:09:13.806 } 00:09:13.806 ] 00:09:13.806 }' 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.806 17:01:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.374 [2024-11-20 17:01:38.157549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.374 [2024-11-20 17:01:38.157922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:14.374 [2024-11-20 17:01:38.157964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:14.374 [2024-11-20 17:01:38.158403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:14.374 [2024-11-20 17:01:38.158691] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:14.374 [2024-11-20 17:01:38.158711] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:14.374 [2024-11-20 17:01:38.159167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.374 BaseBdev3 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.374 17:01:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.374 [ 00:09:14.374 { 00:09:14.374 "name": "BaseBdev3", 00:09:14.374 "aliases": [ 00:09:14.374 "998754f7-ce61-434e-b93c-0cabf56b840e" 00:09:14.374 ], 00:09:14.374 "product_name": "Malloc disk", 00:09:14.374 "block_size": 512, 00:09:14.374 "num_blocks": 65536, 00:09:14.374 "uuid": "998754f7-ce61-434e-b93c-0cabf56b840e", 00:09:14.374 "assigned_rate_limits": { 00:09:14.374 "rw_ios_per_sec": 0, 00:09:14.374 "rw_mbytes_per_sec": 0, 00:09:14.374 "r_mbytes_per_sec": 0, 00:09:14.374 "w_mbytes_per_sec": 0 00:09:14.374 }, 00:09:14.374 "claimed": true, 00:09:14.374 "claim_type": "exclusive_write", 00:09:14.374 "zoned": false, 00:09:14.374 "supported_io_types": { 00:09:14.374 "read": true, 00:09:14.374 "write": true, 00:09:14.374 "unmap": true, 00:09:14.374 "flush": true, 00:09:14.374 "reset": true, 00:09:14.374 "nvme_admin": false, 00:09:14.374 "nvme_io": false, 00:09:14.374 "nvme_io_md": false, 00:09:14.374 "write_zeroes": true, 00:09:14.374 "zcopy": true, 00:09:14.374 "get_zone_info": false, 00:09:14.374 "zone_management": false, 00:09:14.374 "zone_append": false, 00:09:14.374 "compare": false, 
00:09:14.374 "compare_and_write": false, 00:09:14.374 "abort": true, 00:09:14.374 "seek_hole": false, 00:09:14.374 "seek_data": false, 00:09:14.374 "copy": true, 00:09:14.374 "nvme_iov_md": false 00:09:14.374 }, 00:09:14.374 "memory_domains": [ 00:09:14.374 { 00:09:14.374 "dma_device_id": "system", 00:09:14.374 "dma_device_type": 1 00:09:14.374 }, 00:09:14.374 { 00:09:14.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.374 "dma_device_type": 2 00:09:14.374 } 00:09:14.374 ], 00:09:14.374 "driver_specific": {} 00:09:14.374 } 00:09:14.374 ] 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.374 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.632 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.632 "name": "Existed_Raid", 00:09:14.632 "uuid": "7a08ce41-1323-4995-9b92-e0ced7a1d94a", 00:09:14.632 "strip_size_kb": 0, 00:09:14.632 "state": "online", 00:09:14.632 "raid_level": "raid1", 00:09:14.632 "superblock": false, 00:09:14.632 "num_base_bdevs": 3, 00:09:14.632 "num_base_bdevs_discovered": 3, 00:09:14.632 "num_base_bdevs_operational": 3, 00:09:14.632 "base_bdevs_list": [ 00:09:14.632 { 00:09:14.632 "name": "BaseBdev1", 00:09:14.632 "uuid": "1acaf67e-2416-4765-bc16-87cd4ce678d6", 00:09:14.632 "is_configured": true, 00:09:14.632 "data_offset": 0, 00:09:14.632 "data_size": 65536 00:09:14.632 }, 00:09:14.632 { 00:09:14.632 "name": "BaseBdev2", 00:09:14.632 "uuid": "db388e5f-2c67-4ba9-8311-15c476d231b6", 00:09:14.633 "is_configured": true, 00:09:14.633 "data_offset": 0, 00:09:14.633 "data_size": 65536 00:09:14.633 }, 00:09:14.633 { 00:09:14.633 "name": "BaseBdev3", 00:09:14.633 "uuid": "998754f7-ce61-434e-b93c-0cabf56b840e", 00:09:14.633 "is_configured": true, 00:09:14.633 "data_offset": 0, 00:09:14.633 "data_size": 65536 00:09:14.633 } 00:09:14.633 ] 00:09:14.633 }' 00:09:14.633 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:14.633 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.891 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:14.891 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:14.891 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:14.891 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:14.891 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:14.891 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:14.891 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:14.891 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:14.891 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.891 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.891 [2024-11-20 17:01:38.714267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.891 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.891 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.891 "name": "Existed_Raid", 00:09:14.891 "aliases": [ 00:09:14.891 "7a08ce41-1323-4995-9b92-e0ced7a1d94a" 00:09:14.891 ], 00:09:14.891 "product_name": "Raid Volume", 00:09:14.891 "block_size": 512, 00:09:14.891 "num_blocks": 65536, 00:09:14.891 "uuid": "7a08ce41-1323-4995-9b92-e0ced7a1d94a", 00:09:14.891 "assigned_rate_limits": { 00:09:14.891 "rw_ios_per_sec": 0, 00:09:14.891 "rw_mbytes_per_sec": 0, 00:09:14.891 "r_mbytes_per_sec": 
0, 00:09:14.891 "w_mbytes_per_sec": 0 00:09:14.891 }, 00:09:14.891 "claimed": false, 00:09:14.891 "zoned": false, 00:09:14.891 "supported_io_types": { 00:09:14.891 "read": true, 00:09:14.891 "write": true, 00:09:14.891 "unmap": false, 00:09:14.891 "flush": false, 00:09:14.891 "reset": true, 00:09:14.891 "nvme_admin": false, 00:09:14.891 "nvme_io": false, 00:09:14.892 "nvme_io_md": false, 00:09:14.892 "write_zeroes": true, 00:09:14.892 "zcopy": false, 00:09:14.892 "get_zone_info": false, 00:09:14.892 "zone_management": false, 00:09:14.892 "zone_append": false, 00:09:14.892 "compare": false, 00:09:14.892 "compare_and_write": false, 00:09:14.892 "abort": false, 00:09:14.892 "seek_hole": false, 00:09:14.892 "seek_data": false, 00:09:14.892 "copy": false, 00:09:14.892 "nvme_iov_md": false 00:09:14.892 }, 00:09:14.892 "memory_domains": [ 00:09:14.892 { 00:09:14.892 "dma_device_id": "system", 00:09:14.892 "dma_device_type": 1 00:09:14.892 }, 00:09:14.892 { 00:09:14.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.892 "dma_device_type": 2 00:09:14.892 }, 00:09:14.892 { 00:09:14.892 "dma_device_id": "system", 00:09:14.892 "dma_device_type": 1 00:09:14.892 }, 00:09:14.892 { 00:09:14.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.892 "dma_device_type": 2 00:09:14.892 }, 00:09:14.892 { 00:09:14.892 "dma_device_id": "system", 00:09:14.892 "dma_device_type": 1 00:09:14.892 }, 00:09:14.892 { 00:09:14.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.892 "dma_device_type": 2 00:09:14.892 } 00:09:14.892 ], 00:09:14.892 "driver_specific": { 00:09:14.892 "raid": { 00:09:14.892 "uuid": "7a08ce41-1323-4995-9b92-e0ced7a1d94a", 00:09:14.892 "strip_size_kb": 0, 00:09:14.892 "state": "online", 00:09:14.892 "raid_level": "raid1", 00:09:14.892 "superblock": false, 00:09:14.892 "num_base_bdevs": 3, 00:09:14.892 "num_base_bdevs_discovered": 3, 00:09:14.892 "num_base_bdevs_operational": 3, 00:09:14.892 "base_bdevs_list": [ 00:09:14.892 { 00:09:14.892 "name": "BaseBdev1", 
00:09:14.892 "uuid": "1acaf67e-2416-4765-bc16-87cd4ce678d6", 00:09:14.892 "is_configured": true, 00:09:14.892 "data_offset": 0, 00:09:14.892 "data_size": 65536 00:09:14.892 }, 00:09:14.892 { 00:09:14.892 "name": "BaseBdev2", 00:09:14.892 "uuid": "db388e5f-2c67-4ba9-8311-15c476d231b6", 00:09:14.892 "is_configured": true, 00:09:14.892 "data_offset": 0, 00:09:14.892 "data_size": 65536 00:09:14.892 }, 00:09:14.892 { 00:09:14.892 "name": "BaseBdev3", 00:09:14.892 "uuid": "998754f7-ce61-434e-b93c-0cabf56b840e", 00:09:14.892 "is_configured": true, 00:09:14.892 "data_offset": 0, 00:09:14.892 "data_size": 65536 00:09:14.892 } 00:09:14.892 ] 00:09:14.892 } 00:09:14.892 } 00:09:14.892 }' 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:15.150 BaseBdev2 00:09:15.150 BaseBdev3' 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.150 17:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.410 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:15.410 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.410 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:15.410 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.410 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.410 [2024-11-20 17:01:39.030040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:15.410 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.410 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.411 "name": "Existed_Raid", 00:09:15.411 "uuid": "7a08ce41-1323-4995-9b92-e0ced7a1d94a", 00:09:15.411 "strip_size_kb": 0, 00:09:15.411 "state": "online", 00:09:15.411 "raid_level": "raid1", 00:09:15.411 "superblock": false, 00:09:15.411 "num_base_bdevs": 3, 00:09:15.411 "num_base_bdevs_discovered": 2, 00:09:15.411 "num_base_bdevs_operational": 2, 00:09:15.411 "base_bdevs_list": [ 00:09:15.411 { 00:09:15.411 "name": null, 00:09:15.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.411 "is_configured": false, 00:09:15.411 "data_offset": 0, 00:09:15.411 "data_size": 65536 00:09:15.411 }, 00:09:15.411 { 00:09:15.411 "name": "BaseBdev2", 00:09:15.411 "uuid": "db388e5f-2c67-4ba9-8311-15c476d231b6", 00:09:15.411 "is_configured": true, 00:09:15.411 "data_offset": 0, 00:09:15.411 "data_size": 65536 00:09:15.411 }, 00:09:15.411 { 00:09:15.411 "name": "BaseBdev3", 00:09:15.411 "uuid": "998754f7-ce61-434e-b93c-0cabf56b840e", 00:09:15.411 "is_configured": true, 
00:09:15.411 "data_offset": 0, 00:09:15.411 "data_size": 65536 00:09:15.411 } 00:09:15.411 ] 00:09:15.411 }' 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.411 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.978 [2024-11-20 17:01:39.697380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:15.978 17:01:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.978 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.978 [2024-11-20 17:01:39.831186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:15.978 [2024-11-20 17:01:39.831496] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.237 [2024-11-20 17:01:39.911365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.238 [2024-11-20 17:01:39.911460] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.238 [2024-11-20 17:01:39.911480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:16.238 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.238 
17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:16.238 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:16.238 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.238 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:16.238 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.238 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.238 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.238 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:16.238 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:16.238 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:16.238 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:16.238 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:16.238 17:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:16.238 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.238 17:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.238 BaseBdev2 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:16.238 17:01:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.238 [ 00:09:16.238 { 00:09:16.238 "name": "BaseBdev2", 00:09:16.238 "aliases": [ 00:09:16.238 "cfad85a7-8345-4018-95ec-a0803004821e" 00:09:16.238 ], 00:09:16.238 "product_name": "Malloc disk", 00:09:16.238 "block_size": 512, 00:09:16.238 "num_blocks": 65536, 00:09:16.238 "uuid": "cfad85a7-8345-4018-95ec-a0803004821e", 00:09:16.238 "assigned_rate_limits": { 00:09:16.238 "rw_ios_per_sec": 0, 00:09:16.238 "rw_mbytes_per_sec": 0, 00:09:16.238 "r_mbytes_per_sec": 0, 00:09:16.238 "w_mbytes_per_sec": 0 00:09:16.238 }, 00:09:16.238 "claimed": false, 00:09:16.238 "zoned": false, 00:09:16.238 "supported_io_types": { 00:09:16.238 "read": true, 00:09:16.238 "write": true, 00:09:16.238 "unmap": true, 00:09:16.238 "flush": true, 00:09:16.238 "reset": true, 00:09:16.238 "nvme_admin": 
false, 00:09:16.238 "nvme_io": false, 00:09:16.238 "nvme_io_md": false, 00:09:16.238 "write_zeroes": true, 00:09:16.238 "zcopy": true, 00:09:16.238 "get_zone_info": false, 00:09:16.238 "zone_management": false, 00:09:16.238 "zone_append": false, 00:09:16.238 "compare": false, 00:09:16.238 "compare_and_write": false, 00:09:16.238 "abort": true, 00:09:16.238 "seek_hole": false, 00:09:16.238 "seek_data": false, 00:09:16.238 "copy": true, 00:09:16.238 "nvme_iov_md": false 00:09:16.238 }, 00:09:16.238 "memory_domains": [ 00:09:16.238 { 00:09:16.238 "dma_device_id": "system", 00:09:16.238 "dma_device_type": 1 00:09:16.238 }, 00:09:16.238 { 00:09:16.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.238 "dma_device_type": 2 00:09:16.238 } 00:09:16.238 ], 00:09:16.238 "driver_specific": {} 00:09:16.238 } 00:09:16.238 ] 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.238 BaseBdev3 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:16.238 17:01:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.238 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.238 [ 00:09:16.238 { 00:09:16.238 "name": "BaseBdev3", 00:09:16.238 "aliases": [ 00:09:16.238 "ac687855-f7f4-4d37-a2d6-5dfc32c73203" 00:09:16.238 ], 00:09:16.238 "product_name": "Malloc disk", 00:09:16.238 "block_size": 512, 00:09:16.238 "num_blocks": 65536, 00:09:16.238 "uuid": "ac687855-f7f4-4d37-a2d6-5dfc32c73203", 00:09:16.238 "assigned_rate_limits": { 00:09:16.238 "rw_ios_per_sec": 0, 00:09:16.238 "rw_mbytes_per_sec": 0, 00:09:16.238 "r_mbytes_per_sec": 0, 00:09:16.238 "w_mbytes_per_sec": 0 00:09:16.238 }, 00:09:16.238 "claimed": false, 00:09:16.238 "zoned": false, 00:09:16.238 "supported_io_types": { 00:09:16.238 "read": true, 00:09:16.238 "write": true, 00:09:16.238 "unmap": true, 00:09:16.238 "flush": true, 00:09:16.498 "reset": true, 00:09:16.498 "nvme_admin": 
false, 00:09:16.498 "nvme_io": false, 00:09:16.498 "nvme_io_md": false, 00:09:16.498 "write_zeroes": true, 00:09:16.498 "zcopy": true, 00:09:16.498 "get_zone_info": false, 00:09:16.498 "zone_management": false, 00:09:16.498 "zone_append": false, 00:09:16.498 "compare": false, 00:09:16.498 "compare_and_write": false, 00:09:16.498 "abort": true, 00:09:16.498 "seek_hole": false, 00:09:16.498 "seek_data": false, 00:09:16.498 "copy": true, 00:09:16.498 "nvme_iov_md": false 00:09:16.498 }, 00:09:16.498 "memory_domains": [ 00:09:16.498 { 00:09:16.498 "dma_device_id": "system", 00:09:16.498 "dma_device_type": 1 00:09:16.498 }, 00:09:16.498 { 00:09:16.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.498 "dma_device_type": 2 00:09:16.498 } 00:09:16.498 ], 00:09:16.498 "driver_specific": {} 00:09:16.498 } 00:09:16.498 ] 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.498 [2024-11-20 17:01:40.118788] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:16.498 [2024-11-20 17:01:40.118873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:16.498 [2024-11-20 17:01:40.118900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:09:16.498 [2024-11-20 17:01:40.121308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.498 
17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.498 "name": "Existed_Raid", 00:09:16.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.498 "strip_size_kb": 0, 00:09:16.498 "state": "configuring", 00:09:16.498 "raid_level": "raid1", 00:09:16.498 "superblock": false, 00:09:16.498 "num_base_bdevs": 3, 00:09:16.498 "num_base_bdevs_discovered": 2, 00:09:16.498 "num_base_bdevs_operational": 3, 00:09:16.498 "base_bdevs_list": [ 00:09:16.498 { 00:09:16.498 "name": "BaseBdev1", 00:09:16.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.498 "is_configured": false, 00:09:16.498 "data_offset": 0, 00:09:16.498 "data_size": 0 00:09:16.498 }, 00:09:16.498 { 00:09:16.498 "name": "BaseBdev2", 00:09:16.498 "uuid": "cfad85a7-8345-4018-95ec-a0803004821e", 00:09:16.498 "is_configured": true, 00:09:16.498 "data_offset": 0, 00:09:16.498 "data_size": 65536 00:09:16.498 }, 00:09:16.498 { 00:09:16.498 "name": "BaseBdev3", 00:09:16.498 "uuid": "ac687855-f7f4-4d37-a2d6-5dfc32c73203", 00:09:16.498 "is_configured": true, 00:09:16.498 "data_offset": 0, 00:09:16.498 "data_size": 65536 00:09:16.498 } 00:09:16.498 ] 00:09:16.498 }' 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.498 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.066 [2024-11-20 17:01:40.630999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.066 17:01:40 
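Throughout the transcript, `verify_raid_bdev_state` captures the `Existed_Raid` entry out of `bdev_raid_get_bdevs all` with a jq `select` filter and then reads fields from the captured blob. A minimal standalone sketch of that filter, run against canned JSON shaped like the dump above (sample data only, no live SPDK target), is:

```shell
#!/usr/bin/env bash
# Sketch: replay the jq extraction the test performs, on canned
# bdev_raid_get_bdevs-style output (hypothetical sample data).
json='[{"name":"Existed_Raid","state":"configuring","num_base_bdevs_discovered":2}]'

# Same shape of filter as the transcript's bdev_raid.sh@113 step:
# pick the raid bdev by name out of the array.
info=$(echo "$json" | jq -r '.[] | select(.name == "Existed_Raid")')

# Individual fields can then be read from the captured object.
state=$(echo "$info" | jq -r '.state')
echo "$state"
```

The two-step capture (whole object first, fields second) mirrors how the test stores `raid_bdev_info` once and interrogates it repeatedly without re-issuing the RPC.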
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.066 "name": "Existed_Raid", 00:09:17.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.066 "strip_size_kb": 0, 00:09:17.066 "state": "configuring", 00:09:17.066 
"raid_level": "raid1", 00:09:17.066 "superblock": false, 00:09:17.066 "num_base_bdevs": 3, 00:09:17.066 "num_base_bdevs_discovered": 1, 00:09:17.066 "num_base_bdevs_operational": 3, 00:09:17.066 "base_bdevs_list": [ 00:09:17.066 { 00:09:17.066 "name": "BaseBdev1", 00:09:17.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.066 "is_configured": false, 00:09:17.066 "data_offset": 0, 00:09:17.066 "data_size": 0 00:09:17.066 }, 00:09:17.066 { 00:09:17.066 "name": null, 00:09:17.066 "uuid": "cfad85a7-8345-4018-95ec-a0803004821e", 00:09:17.066 "is_configured": false, 00:09:17.066 "data_offset": 0, 00:09:17.066 "data_size": 65536 00:09:17.066 }, 00:09:17.066 { 00:09:17.066 "name": "BaseBdev3", 00:09:17.066 "uuid": "ac687855-f7f4-4d37-a2d6-5dfc32c73203", 00:09:17.066 "is_configured": true, 00:09:17.066 "data_offset": 0, 00:09:17.066 "data_size": 65536 00:09:17.066 } 00:09:17.066 ] 00:09:17.066 }' 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.066 17:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.325 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.325 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.325 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.325 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:17.325 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.585 [2024-11-20 17:01:41.244117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.585 BaseBdev1 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.585 [ 00:09:17.585 { 00:09:17.585 "name": "BaseBdev1", 00:09:17.585 "aliases": [ 00:09:17.585 
"c5223f80-5415-4cf8-91f3-dc6702539e9d" 00:09:17.585 ], 00:09:17.585 "product_name": "Malloc disk", 00:09:17.585 "block_size": 512, 00:09:17.585 "num_blocks": 65536, 00:09:17.585 "uuid": "c5223f80-5415-4cf8-91f3-dc6702539e9d", 00:09:17.585 "assigned_rate_limits": { 00:09:17.585 "rw_ios_per_sec": 0, 00:09:17.585 "rw_mbytes_per_sec": 0, 00:09:17.585 "r_mbytes_per_sec": 0, 00:09:17.585 "w_mbytes_per_sec": 0 00:09:17.585 }, 00:09:17.585 "claimed": true, 00:09:17.585 "claim_type": "exclusive_write", 00:09:17.585 "zoned": false, 00:09:17.585 "supported_io_types": { 00:09:17.585 "read": true, 00:09:17.585 "write": true, 00:09:17.585 "unmap": true, 00:09:17.585 "flush": true, 00:09:17.585 "reset": true, 00:09:17.585 "nvme_admin": false, 00:09:17.585 "nvme_io": false, 00:09:17.585 "nvme_io_md": false, 00:09:17.585 "write_zeroes": true, 00:09:17.585 "zcopy": true, 00:09:17.585 "get_zone_info": false, 00:09:17.585 "zone_management": false, 00:09:17.585 "zone_append": false, 00:09:17.585 "compare": false, 00:09:17.585 "compare_and_write": false, 00:09:17.585 "abort": true, 00:09:17.585 "seek_hole": false, 00:09:17.585 "seek_data": false, 00:09:17.585 "copy": true, 00:09:17.585 "nvme_iov_md": false 00:09:17.585 }, 00:09:17.585 "memory_domains": [ 00:09:17.585 { 00:09:17.585 "dma_device_id": "system", 00:09:17.585 "dma_device_type": 1 00:09:17.585 }, 00:09:17.585 { 00:09:17.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.585 "dma_device_type": 2 00:09:17.585 } 00:09:17.585 ], 00:09:17.585 "driver_specific": {} 00:09:17.585 } 00:09:17.585 ] 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- 
# local raid_bdev_name=Existed_Raid 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.585 "name": "Existed_Raid", 00:09:17.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.585 "strip_size_kb": 0, 00:09:17.585 "state": "configuring", 00:09:17.585 "raid_level": "raid1", 00:09:17.585 "superblock": false, 00:09:17.585 "num_base_bdevs": 3, 00:09:17.585 "num_base_bdevs_discovered": 2, 00:09:17.585 "num_base_bdevs_operational": 3, 00:09:17.585 "base_bdevs_list": [ 
00:09:17.585 { 00:09:17.585 "name": "BaseBdev1", 00:09:17.585 "uuid": "c5223f80-5415-4cf8-91f3-dc6702539e9d", 00:09:17.585 "is_configured": true, 00:09:17.585 "data_offset": 0, 00:09:17.585 "data_size": 65536 00:09:17.585 }, 00:09:17.585 { 00:09:17.585 "name": null, 00:09:17.585 "uuid": "cfad85a7-8345-4018-95ec-a0803004821e", 00:09:17.585 "is_configured": false, 00:09:17.585 "data_offset": 0, 00:09:17.585 "data_size": 65536 00:09:17.585 }, 00:09:17.585 { 00:09:17.585 "name": "BaseBdev3", 00:09:17.585 "uuid": "ac687855-f7f4-4d37-a2d6-5dfc32c73203", 00:09:17.585 "is_configured": true, 00:09:17.585 "data_offset": 0, 00:09:17.585 "data_size": 65536 00:09:17.585 } 00:09:17.585 ] 00:09:17.585 }' 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.585 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.159 [2024-11-20 17:01:41.844392] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:18.159 "name": "Existed_Raid", 00:09:18.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.159 "strip_size_kb": 0, 00:09:18.159 "state": "configuring", 00:09:18.159 "raid_level": "raid1", 00:09:18.159 "superblock": false, 00:09:18.159 "num_base_bdevs": 3, 00:09:18.159 "num_base_bdevs_discovered": 1, 00:09:18.159 "num_base_bdevs_operational": 3, 00:09:18.159 "base_bdevs_list": [ 00:09:18.159 { 00:09:18.159 "name": "BaseBdev1", 00:09:18.159 "uuid": "c5223f80-5415-4cf8-91f3-dc6702539e9d", 00:09:18.159 "is_configured": true, 00:09:18.159 "data_offset": 0, 00:09:18.159 "data_size": 65536 00:09:18.159 }, 00:09:18.159 { 00:09:18.159 "name": null, 00:09:18.159 "uuid": "cfad85a7-8345-4018-95ec-a0803004821e", 00:09:18.159 "is_configured": false, 00:09:18.159 "data_offset": 0, 00:09:18.159 "data_size": 65536 00:09:18.159 }, 00:09:18.159 { 00:09:18.159 "name": null, 00:09:18.159 "uuid": "ac687855-f7f4-4d37-a2d6-5dfc32c73203", 00:09:18.159 "is_configured": false, 00:09:18.159 "data_offset": 0, 00:09:18.159 "data_size": 65536 00:09:18.159 } 00:09:18.159 ] 00:09:18.159 }' 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.159 17:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.727 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.727 17:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.727 17:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.727 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:18.727 17:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.727 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 
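The `[[ false == \f\a\l\s\e ]]` and `[[ true == \t\r\u\e ]]` lines in the log are bash pattern matches comparing a jq-extracted boolean against an escaped literal (the escaping is an xtrace artifact of `[[ ]]` pattern words). A standalone sketch of that check pattern, again against canned JSON rather than a live target, is:

```shell
#!/usr/bin/env bash
# Sketch: the is_configured check pattern seen throughout the transcript,
# applied to a hypothetical base_bdevs_list fragment.
json='{"base_bdevs_list":[{"name":null,"is_configured":false},{"name":"BaseBdev3","is_configured":true}]}'

# Extract the boolean for one slot, as the test does with a jq path.
cfg=$(echo "$json" | jq '.base_bdevs_list[0].is_configured')

# Compare against the literal, as in the log's [[ false == \f\a\l\s\e ]].
if [[ $cfg == false ]]; then
  echo "slot 0 not configured"
fi
```

A removed base bdev keeps its slot in `base_bdevs_list` with `"name": null` and `"is_configured": false`, which is why the test checks slots positionally rather than by name.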
00:09:18.727 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:18.727 17:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.727 17:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.727 [2024-11-20 17:01:42.436752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.727 17:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.727 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:18.727 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.727 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.727 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.727 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.728 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.728 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.728 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.728 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.728 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.728 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.728 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:18.728 17:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.728 17:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.728 17:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.728 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.728 "name": "Existed_Raid", 00:09:18.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.728 "strip_size_kb": 0, 00:09:18.728 "state": "configuring", 00:09:18.728 "raid_level": "raid1", 00:09:18.728 "superblock": false, 00:09:18.728 "num_base_bdevs": 3, 00:09:18.728 "num_base_bdevs_discovered": 2, 00:09:18.728 "num_base_bdevs_operational": 3, 00:09:18.728 "base_bdevs_list": [ 00:09:18.728 { 00:09:18.728 "name": "BaseBdev1", 00:09:18.728 "uuid": "c5223f80-5415-4cf8-91f3-dc6702539e9d", 00:09:18.728 "is_configured": true, 00:09:18.728 "data_offset": 0, 00:09:18.728 "data_size": 65536 00:09:18.728 }, 00:09:18.728 { 00:09:18.728 "name": null, 00:09:18.728 "uuid": "cfad85a7-8345-4018-95ec-a0803004821e", 00:09:18.728 "is_configured": false, 00:09:18.728 "data_offset": 0, 00:09:18.728 "data_size": 65536 00:09:18.728 }, 00:09:18.728 { 00:09:18.728 "name": "BaseBdev3", 00:09:18.728 "uuid": "ac687855-f7f4-4d37-a2d6-5dfc32c73203", 00:09:18.728 "is_configured": true, 00:09:18.728 "data_offset": 0, 00:09:18.728 "data_size": 65536 00:09:18.728 } 00:09:18.728 ] 00:09:18.728 }' 00:09:18.728 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.728 17:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.311 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.311 17:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:19.311 17:01:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.311 17:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.311 17:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.311 [2024-11-20 17:01:43.009014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.311 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.311 "name": "Existed_Raid", 00:09:19.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.311 "strip_size_kb": 0, 00:09:19.311 "state": "configuring", 00:09:19.311 "raid_level": "raid1", 00:09:19.311 "superblock": false, 00:09:19.311 "num_base_bdevs": 3, 00:09:19.311 "num_base_bdevs_discovered": 1, 00:09:19.311 "num_base_bdevs_operational": 3, 00:09:19.311 "base_bdevs_list": [ 00:09:19.311 { 00:09:19.311 "name": null, 00:09:19.311 "uuid": "c5223f80-5415-4cf8-91f3-dc6702539e9d", 00:09:19.311 "is_configured": false, 00:09:19.311 "data_offset": 0, 00:09:19.311 "data_size": 65536 00:09:19.311 }, 00:09:19.311 { 00:09:19.311 "name": null, 00:09:19.311 "uuid": "cfad85a7-8345-4018-95ec-a0803004821e", 00:09:19.311 "is_configured": false, 00:09:19.311 "data_offset": 0, 00:09:19.311 "data_size": 65536 00:09:19.312 }, 00:09:19.312 { 00:09:19.312 "name": "BaseBdev3", 00:09:19.312 "uuid": "ac687855-f7f4-4d37-a2d6-5dfc32c73203", 00:09:19.312 "is_configured": true, 00:09:19.312 "data_offset": 0, 00:09:19.312 "data_size": 65536 00:09:19.312 } 00:09:19.312 ] 00:09:19.312 }' 00:09:19.312 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:19.312 17:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.881 [2024-11-20 17:01:43.684591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.881 17:01:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.881 "name": "Existed_Raid", 00:09:19.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.881 "strip_size_kb": 0, 00:09:19.881 "state": "configuring", 00:09:19.881 "raid_level": "raid1", 00:09:19.881 "superblock": false, 00:09:19.881 "num_base_bdevs": 3, 00:09:19.881 "num_base_bdevs_discovered": 2, 00:09:19.881 "num_base_bdevs_operational": 3, 00:09:19.881 "base_bdevs_list": [ 00:09:19.881 { 00:09:19.881 "name": null, 00:09:19.881 "uuid": "c5223f80-5415-4cf8-91f3-dc6702539e9d", 00:09:19.881 "is_configured": false, 00:09:19.881 "data_offset": 0, 00:09:19.881 "data_size": 65536 00:09:19.881 }, 00:09:19.881 { 00:09:19.881 "name": "BaseBdev2", 00:09:19.881 "uuid": "cfad85a7-8345-4018-95ec-a0803004821e", 00:09:19.881 "is_configured": true, 00:09:19.881 "data_offset": 
0, 00:09:19.881 "data_size": 65536 00:09:19.881 }, 00:09:19.881 { 00:09:19.881 "name": "BaseBdev3", 00:09:19.881 "uuid": "ac687855-f7f4-4d37-a2d6-5dfc32c73203", 00:09:19.881 "is_configured": true, 00:09:19.881 "data_offset": 0, 00:09:19.881 "data_size": 65536 00:09:19.881 } 00:09:19.881 ] 00:09:19.881 }' 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.881 17:01:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.450 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.450 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:20.450 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.450 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.450 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.450 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:20.450 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.450 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:20.450 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.450 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.450 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.450 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c5223f80-5415-4cf8-91f3-dc6702539e9d 00:09:20.450 17:01:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.450 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.709 [2024-11-20 17:01:44.354669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:20.709 [2024-11-20 17:01:44.354736] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:20.709 [2024-11-20 17:01:44.354747] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:20.709 [2024-11-20 17:01:44.355140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:20.709 [2024-11-20 17:01:44.355344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:20.709 [2024-11-20 17:01:44.355365] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:20.709 [2024-11-20 17:01:44.355673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.709 NewBaseBdev 00:09:20.709 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.709 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:20.709 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:20.709 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.709 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:20.709 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.709 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.709 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.709 
17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.709 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.709 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.709 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:20.709 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.709 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.709 [ 00:09:20.709 { 00:09:20.709 "name": "NewBaseBdev", 00:09:20.709 "aliases": [ 00:09:20.709 "c5223f80-5415-4cf8-91f3-dc6702539e9d" 00:09:20.709 ], 00:09:20.709 "product_name": "Malloc disk", 00:09:20.709 "block_size": 512, 00:09:20.709 "num_blocks": 65536, 00:09:20.709 "uuid": "c5223f80-5415-4cf8-91f3-dc6702539e9d", 00:09:20.709 "assigned_rate_limits": { 00:09:20.709 "rw_ios_per_sec": 0, 00:09:20.709 "rw_mbytes_per_sec": 0, 00:09:20.709 "r_mbytes_per_sec": 0, 00:09:20.709 "w_mbytes_per_sec": 0 00:09:20.709 }, 00:09:20.709 "claimed": true, 00:09:20.709 "claim_type": "exclusive_write", 00:09:20.709 "zoned": false, 00:09:20.709 "supported_io_types": { 00:09:20.709 "read": true, 00:09:20.709 "write": true, 00:09:20.709 "unmap": true, 00:09:20.709 "flush": true, 00:09:20.709 "reset": true, 00:09:20.709 "nvme_admin": false, 00:09:20.709 "nvme_io": false, 00:09:20.709 "nvme_io_md": false, 00:09:20.709 "write_zeroes": true, 00:09:20.709 "zcopy": true, 00:09:20.709 "get_zone_info": false, 00:09:20.709 "zone_management": false, 00:09:20.709 "zone_append": false, 00:09:20.709 "compare": false, 00:09:20.709 "compare_and_write": false, 00:09:20.709 "abort": true, 00:09:20.709 "seek_hole": false, 00:09:20.709 "seek_data": false, 00:09:20.709 "copy": true, 00:09:20.709 "nvme_iov_md": false 00:09:20.709 }, 00:09:20.709 
"memory_domains": [ 00:09:20.709 { 00:09:20.709 "dma_device_id": "system", 00:09:20.710 "dma_device_type": 1 00:09:20.710 }, 00:09:20.710 { 00:09:20.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.710 "dma_device_type": 2 00:09:20.710 } 00:09:20.710 ], 00:09:20.710 "driver_specific": {} 00:09:20.710 } 00:09:20.710 ] 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.710 "name": "Existed_Raid", 00:09:20.710 "uuid": "d98d91a9-3290-4923-a375-718ea1536087", 00:09:20.710 "strip_size_kb": 0, 00:09:20.710 "state": "online", 00:09:20.710 "raid_level": "raid1", 00:09:20.710 "superblock": false, 00:09:20.710 "num_base_bdevs": 3, 00:09:20.710 "num_base_bdevs_discovered": 3, 00:09:20.710 "num_base_bdevs_operational": 3, 00:09:20.710 "base_bdevs_list": [ 00:09:20.710 { 00:09:20.710 "name": "NewBaseBdev", 00:09:20.710 "uuid": "c5223f80-5415-4cf8-91f3-dc6702539e9d", 00:09:20.710 "is_configured": true, 00:09:20.710 "data_offset": 0, 00:09:20.710 "data_size": 65536 00:09:20.710 }, 00:09:20.710 { 00:09:20.710 "name": "BaseBdev2", 00:09:20.710 "uuid": "cfad85a7-8345-4018-95ec-a0803004821e", 00:09:20.710 "is_configured": true, 00:09:20.710 "data_offset": 0, 00:09:20.710 "data_size": 65536 00:09:20.710 }, 00:09:20.710 { 00:09:20.710 "name": "BaseBdev3", 00:09:20.710 "uuid": "ac687855-f7f4-4d37-a2d6-5dfc32c73203", 00:09:20.710 "is_configured": true, 00:09:20.710 "data_offset": 0, 00:09:20.710 "data_size": 65536 00:09:20.710 } 00:09:20.710 ] 00:09:20.710 }' 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.710 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.279 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:21.279 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:21.279 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:09:21.279 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.279 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.279 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.279 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.279 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:21.279 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.279 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.279 [2024-11-20 17:01:44.919492] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.279 17:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.279 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.279 "name": "Existed_Raid", 00:09:21.279 "aliases": [ 00:09:21.279 "d98d91a9-3290-4923-a375-718ea1536087" 00:09:21.279 ], 00:09:21.279 "product_name": "Raid Volume", 00:09:21.279 "block_size": 512, 00:09:21.279 "num_blocks": 65536, 00:09:21.279 "uuid": "d98d91a9-3290-4923-a375-718ea1536087", 00:09:21.279 "assigned_rate_limits": { 00:09:21.279 "rw_ios_per_sec": 0, 00:09:21.279 "rw_mbytes_per_sec": 0, 00:09:21.280 "r_mbytes_per_sec": 0, 00:09:21.280 "w_mbytes_per_sec": 0 00:09:21.280 }, 00:09:21.280 "claimed": false, 00:09:21.280 "zoned": false, 00:09:21.280 "supported_io_types": { 00:09:21.280 "read": true, 00:09:21.280 "write": true, 00:09:21.280 "unmap": false, 00:09:21.280 "flush": false, 00:09:21.280 "reset": true, 00:09:21.280 "nvme_admin": false, 00:09:21.280 "nvme_io": false, 00:09:21.280 "nvme_io_md": false, 00:09:21.280 "write_zeroes": true, 
00:09:21.280 "zcopy": false, 00:09:21.280 "get_zone_info": false, 00:09:21.280 "zone_management": false, 00:09:21.280 "zone_append": false, 00:09:21.280 "compare": false, 00:09:21.280 "compare_and_write": false, 00:09:21.280 "abort": false, 00:09:21.280 "seek_hole": false, 00:09:21.280 "seek_data": false, 00:09:21.280 "copy": false, 00:09:21.280 "nvme_iov_md": false 00:09:21.280 }, 00:09:21.280 "memory_domains": [ 00:09:21.280 { 00:09:21.280 "dma_device_id": "system", 00:09:21.280 "dma_device_type": 1 00:09:21.280 }, 00:09:21.280 { 00:09:21.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.280 "dma_device_type": 2 00:09:21.280 }, 00:09:21.280 { 00:09:21.280 "dma_device_id": "system", 00:09:21.280 "dma_device_type": 1 00:09:21.280 }, 00:09:21.280 { 00:09:21.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.280 "dma_device_type": 2 00:09:21.280 }, 00:09:21.280 { 00:09:21.280 "dma_device_id": "system", 00:09:21.280 "dma_device_type": 1 00:09:21.280 }, 00:09:21.280 { 00:09:21.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.280 "dma_device_type": 2 00:09:21.280 } 00:09:21.280 ], 00:09:21.280 "driver_specific": { 00:09:21.280 "raid": { 00:09:21.280 "uuid": "d98d91a9-3290-4923-a375-718ea1536087", 00:09:21.280 "strip_size_kb": 0, 00:09:21.280 "state": "online", 00:09:21.280 "raid_level": "raid1", 00:09:21.280 "superblock": false, 00:09:21.280 "num_base_bdevs": 3, 00:09:21.280 "num_base_bdevs_discovered": 3, 00:09:21.280 "num_base_bdevs_operational": 3, 00:09:21.280 "base_bdevs_list": [ 00:09:21.280 { 00:09:21.280 "name": "NewBaseBdev", 00:09:21.280 "uuid": "c5223f80-5415-4cf8-91f3-dc6702539e9d", 00:09:21.280 "is_configured": true, 00:09:21.280 "data_offset": 0, 00:09:21.280 "data_size": 65536 00:09:21.280 }, 00:09:21.280 { 00:09:21.280 "name": "BaseBdev2", 00:09:21.280 "uuid": "cfad85a7-8345-4018-95ec-a0803004821e", 00:09:21.280 "is_configured": true, 00:09:21.280 "data_offset": 0, 00:09:21.280 "data_size": 65536 00:09:21.280 }, 00:09:21.280 { 00:09:21.280 
"name": "BaseBdev3", 00:09:21.280 "uuid": "ac687855-f7f4-4d37-a2d6-5dfc32c73203", 00:09:21.280 "is_configured": true, 00:09:21.280 "data_offset": 0, 00:09:21.280 "data_size": 65536 00:09:21.280 } 00:09:21.280 ] 00:09:21.280 } 00:09:21.280 } 00:09:21.280 }' 00:09:21.280 17:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.280 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:21.280 BaseBdev2 00:09:21.280 BaseBdev3' 00:09:21.280 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.280 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.280 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.280 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:21.280 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.280 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.280 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.280 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.280 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.280 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.280 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.280 17:01:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:21.280 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.280 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.280 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:21.540 [2024-11-20 17:01:45.239083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:21.540 [2024-11-20 17:01:45.239166] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.540 [2024-11-20 17:01:45.239256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.540 [2024-11-20 17:01:45.239828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.540 [2024-11-20 17:01:45.239848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67256 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67256 ']' 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67256 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67256 00:09:21.540 killing process with pid 67256 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67256' 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 67256 00:09:21.540 [2024-11-20 17:01:45.281549] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.540 17:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67256 00:09:21.799 [2024-11-20 17:01:45.561517] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:22.737 ************************************ 00:09:22.737 END TEST raid_state_function_test 00:09:22.737 ************************************ 00:09:22.737 17:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:22.737 00:09:22.737 real 0m11.964s 00:09:22.737 user 0m19.902s 00:09:22.737 sys 0m1.618s 00:09:22.737 17:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.737 17:01:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.997 17:01:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:22.997 17:01:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:22.997 17:01:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.997 17:01:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:22.997 ************************************ 00:09:22.997 START TEST raid_state_function_test_sb 00:09:22.997 ************************************ 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:22.997 Process raid pid: 67895 00:09:22.997 17:01:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67895 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67895' 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67895 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67895 ']' 00:09:22.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.997 17:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.997 [2024-11-20 17:01:46.750154] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:09:22.997 [2024-11-20 17:01:46.750313] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.257 [2024-11-20 17:01:46.923220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.257 [2024-11-20 17:01:47.054282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.516 [2024-11-20 17:01:47.257423] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.516 [2024-11-20 17:01:47.257470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.084 [2024-11-20 17:01:47.789056] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:24.084 [2024-11-20 17:01:47.789134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:24.084 [2024-11-20 17:01:47.789151] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:24.084 [2024-11-20 17:01:47.789166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:24.084 [2024-11-20 17:01:47.789176] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:24.084 [2024-11-20 17:01:47.789189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.084 17:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.084 "name": "Existed_Raid", 00:09:24.084 "uuid": "649d4be4-5977-45f3-ac47-6f61ee93bb93", 00:09:24.084 "strip_size_kb": 0, 00:09:24.084 "state": "configuring", 00:09:24.084 "raid_level": "raid1", 00:09:24.084 "superblock": true, 00:09:24.084 "num_base_bdevs": 3, 00:09:24.084 "num_base_bdevs_discovered": 0, 00:09:24.084 "num_base_bdevs_operational": 3, 00:09:24.084 "base_bdevs_list": [ 00:09:24.084 { 00:09:24.084 "name": "BaseBdev1", 00:09:24.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.084 "is_configured": false, 00:09:24.084 "data_offset": 0, 00:09:24.084 "data_size": 0 00:09:24.084 }, 00:09:24.084 { 00:09:24.084 "name": "BaseBdev2", 00:09:24.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.084 "is_configured": false, 00:09:24.084 "data_offset": 0, 00:09:24.084 "data_size": 0 00:09:24.084 }, 00:09:24.084 { 00:09:24.084 "name": "BaseBdev3", 00:09:24.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.084 "is_configured": false, 00:09:24.084 "data_offset": 0, 00:09:24.084 "data_size": 0 00:09:24.084 } 00:09:24.084 ] 00:09:24.084 }' 00:09:24.085 17:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.085 17:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.653 [2024-11-20 17:01:48.313166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:24.653 [2024-11-20 17:01:48.313389] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.653 [2024-11-20 17:01:48.321156] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:24.653 [2024-11-20 17:01:48.321222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:24.653 [2024-11-20 17:01:48.321238] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:24.653 [2024-11-20 17:01:48.321254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:24.653 [2024-11-20 17:01:48.321263] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:24.653 [2024-11-20 17:01:48.321276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.653 [2024-11-20 17:01:48.365679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.653 BaseBdev1 
00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.653 [ 00:09:24.653 { 00:09:24.653 "name": "BaseBdev1", 00:09:24.653 "aliases": [ 00:09:24.653 "7e41524c-68b0-41ee-a2dd-67f2afde9192" 00:09:24.653 ], 00:09:24.653 "product_name": "Malloc disk", 00:09:24.653 "block_size": 512, 00:09:24.653 "num_blocks": 65536, 00:09:24.653 "uuid": "7e41524c-68b0-41ee-a2dd-67f2afde9192", 00:09:24.653 "assigned_rate_limits": { 00:09:24.653 
"rw_ios_per_sec": 0, 00:09:24.653 "rw_mbytes_per_sec": 0, 00:09:24.653 "r_mbytes_per_sec": 0, 00:09:24.653 "w_mbytes_per_sec": 0 00:09:24.653 }, 00:09:24.653 "claimed": true, 00:09:24.653 "claim_type": "exclusive_write", 00:09:24.653 "zoned": false, 00:09:24.653 "supported_io_types": { 00:09:24.653 "read": true, 00:09:24.653 "write": true, 00:09:24.653 "unmap": true, 00:09:24.653 "flush": true, 00:09:24.653 "reset": true, 00:09:24.653 "nvme_admin": false, 00:09:24.653 "nvme_io": false, 00:09:24.653 "nvme_io_md": false, 00:09:24.653 "write_zeroes": true, 00:09:24.653 "zcopy": true, 00:09:24.653 "get_zone_info": false, 00:09:24.653 "zone_management": false, 00:09:24.653 "zone_append": false, 00:09:24.653 "compare": false, 00:09:24.653 "compare_and_write": false, 00:09:24.653 "abort": true, 00:09:24.653 "seek_hole": false, 00:09:24.653 "seek_data": false, 00:09:24.653 "copy": true, 00:09:24.653 "nvme_iov_md": false 00:09:24.653 }, 00:09:24.653 "memory_domains": [ 00:09:24.653 { 00:09:24.653 "dma_device_id": "system", 00:09:24.653 "dma_device_type": 1 00:09:24.653 }, 00:09:24.653 { 00:09:24.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.653 "dma_device_type": 2 00:09:24.653 } 00:09:24.653 ], 00:09:24.653 "driver_specific": {} 00:09:24.653 } 00:09:24.653 ] 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.653 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.654 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.654 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.654 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.654 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.654 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.654 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.654 "name": "Existed_Raid", 00:09:24.654 "uuid": "23261dfa-4b0f-44db-982c-448f79165683", 00:09:24.654 "strip_size_kb": 0, 00:09:24.654 "state": "configuring", 00:09:24.654 "raid_level": "raid1", 00:09:24.654 "superblock": true, 00:09:24.654 "num_base_bdevs": 3, 00:09:24.654 "num_base_bdevs_discovered": 1, 00:09:24.654 "num_base_bdevs_operational": 3, 00:09:24.654 "base_bdevs_list": [ 00:09:24.654 { 00:09:24.654 "name": "BaseBdev1", 00:09:24.654 "uuid": "7e41524c-68b0-41ee-a2dd-67f2afde9192", 00:09:24.654 "is_configured": true, 00:09:24.654 "data_offset": 2048, 00:09:24.654 "data_size": 63488 
00:09:24.654 }, 00:09:24.654 { 00:09:24.654 "name": "BaseBdev2", 00:09:24.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.654 "is_configured": false, 00:09:24.654 "data_offset": 0, 00:09:24.654 "data_size": 0 00:09:24.654 }, 00:09:24.654 { 00:09:24.654 "name": "BaseBdev3", 00:09:24.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.654 "is_configured": false, 00:09:24.654 "data_offset": 0, 00:09:24.654 "data_size": 0 00:09:24.654 } 00:09:24.654 ] 00:09:24.654 }' 00:09:24.654 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.654 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.296 [2024-11-20 17:01:48.929927] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.296 [2024-11-20 17:01:48.929991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.296 [2024-11-20 17:01:48.937954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.296 [2024-11-20 17:01:48.940490] 
bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.296 [2024-11-20 17:01:48.940718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:25.296 [2024-11-20 17:01:48.940746] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:25.296 [2024-11-20 17:01:48.940763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.296 "name": "Existed_Raid", 00:09:25.296 "uuid": "5030c904-7ca9-40ce-ae11-7b0eb302e5c6", 00:09:25.296 "strip_size_kb": 0, 00:09:25.296 "state": "configuring", 00:09:25.296 "raid_level": "raid1", 00:09:25.296 "superblock": true, 00:09:25.296 "num_base_bdevs": 3, 00:09:25.296 "num_base_bdevs_discovered": 1, 00:09:25.296 "num_base_bdevs_operational": 3, 00:09:25.296 "base_bdevs_list": [ 00:09:25.296 { 00:09:25.296 "name": "BaseBdev1", 00:09:25.296 "uuid": "7e41524c-68b0-41ee-a2dd-67f2afde9192", 00:09:25.296 "is_configured": true, 00:09:25.296 "data_offset": 2048, 00:09:25.296 "data_size": 63488 00:09:25.296 }, 00:09:25.296 { 00:09:25.296 "name": "BaseBdev2", 00:09:25.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.296 "is_configured": false, 00:09:25.296 "data_offset": 0, 00:09:25.296 "data_size": 0 00:09:25.296 }, 00:09:25.296 { 00:09:25.296 "name": "BaseBdev3", 00:09:25.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.296 "is_configured": false, 00:09:25.296 "data_offset": 0, 00:09:25.296 "data_size": 0 00:09:25.296 } 00:09:25.296 ] 00:09:25.296 }' 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.296 17:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:25.862 17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:25.862 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.862 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.863 [2024-11-20 17:01:49.500043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.863 BaseBdev2 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.863 [ 00:09:25.863 { 00:09:25.863 "name": "BaseBdev2", 00:09:25.863 "aliases": [ 00:09:25.863 "2a32faa3-906c-481c-9607-508f53801360" 00:09:25.863 ], 00:09:25.863 "product_name": "Malloc disk", 00:09:25.863 "block_size": 512, 00:09:25.863 "num_blocks": 65536, 00:09:25.863 "uuid": "2a32faa3-906c-481c-9607-508f53801360", 00:09:25.863 "assigned_rate_limits": { 00:09:25.863 "rw_ios_per_sec": 0, 00:09:25.863 "rw_mbytes_per_sec": 0, 00:09:25.863 "r_mbytes_per_sec": 0, 00:09:25.863 "w_mbytes_per_sec": 0 00:09:25.863 }, 00:09:25.863 "claimed": true, 00:09:25.863 "claim_type": "exclusive_write", 00:09:25.863 "zoned": false, 00:09:25.863 "supported_io_types": { 00:09:25.863 "read": true, 00:09:25.863 "write": true, 00:09:25.863 "unmap": true, 00:09:25.863 "flush": true, 00:09:25.863 "reset": true, 00:09:25.863 "nvme_admin": false, 00:09:25.863 "nvme_io": false, 00:09:25.863 "nvme_io_md": false, 00:09:25.863 "write_zeroes": true, 00:09:25.863 "zcopy": true, 00:09:25.863 "get_zone_info": false, 00:09:25.863 "zone_management": false, 00:09:25.863 "zone_append": false, 00:09:25.863 "compare": false, 00:09:25.863 "compare_and_write": false, 00:09:25.863 "abort": true, 00:09:25.863 "seek_hole": false, 00:09:25.863 "seek_data": false, 00:09:25.863 "copy": true, 00:09:25.863 "nvme_iov_md": false 00:09:25.863 }, 00:09:25.863 "memory_domains": [ 00:09:25.863 { 00:09:25.863 "dma_device_id": "system", 00:09:25.863 "dma_device_type": 1 00:09:25.863 }, 00:09:25.863 { 00:09:25.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.863 "dma_device_type": 2 00:09:25.863 } 00:09:25.863 ], 00:09:25.863 "driver_specific": {} 00:09:25.863 } 00:09:25.863 ] 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.863 
17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.863 "name": "Existed_Raid", 00:09:25.863 "uuid": "5030c904-7ca9-40ce-ae11-7b0eb302e5c6", 00:09:25.863 "strip_size_kb": 0, 00:09:25.863 "state": "configuring", 00:09:25.863 "raid_level": "raid1", 00:09:25.863 "superblock": true, 00:09:25.863 "num_base_bdevs": 3, 00:09:25.863 "num_base_bdevs_discovered": 2, 00:09:25.863 "num_base_bdevs_operational": 3, 00:09:25.863 "base_bdevs_list": [ 00:09:25.863 { 00:09:25.863 "name": "BaseBdev1", 00:09:25.863 "uuid": "7e41524c-68b0-41ee-a2dd-67f2afde9192", 00:09:25.863 "is_configured": true, 00:09:25.863 "data_offset": 2048, 00:09:25.863 "data_size": 63488 00:09:25.863 }, 00:09:25.863 { 00:09:25.863 "name": "BaseBdev2", 00:09:25.863 "uuid": "2a32faa3-906c-481c-9607-508f53801360", 00:09:25.863 "is_configured": true, 00:09:25.863 "data_offset": 2048, 00:09:25.863 "data_size": 63488 00:09:25.863 }, 00:09:25.863 { 00:09:25.863 "name": "BaseBdev3", 00:09:25.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.863 "is_configured": false, 00:09:25.863 "data_offset": 0, 00:09:25.863 "data_size": 0 00:09:25.863 } 00:09:25.863 ] 00:09:25.863 }' 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.863 17:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.430 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:26.430 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.430 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.430 [2024-11-20 17:01:50.116435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:26.431 [2024-11-20 17:01:50.117067] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:26.431 [2024-11-20 17:01:50.117102] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:26.431 BaseBdev3 00:09:26.431 [2024-11-20 17:01:50.117499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:26.431 [2024-11-20 17:01:50.117738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:26.431 [2024-11-20 17:01:50.117761] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:26.431 [2024-11-20 17:01:50.117968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.431 17:01:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.431 [ 00:09:26.431 { 00:09:26.431 "name": "BaseBdev3", 00:09:26.431 "aliases": [ 00:09:26.431 "034d8888-100a-4442-a863-7f99be3061ac" 00:09:26.431 ], 00:09:26.431 "product_name": "Malloc disk", 00:09:26.431 "block_size": 512, 00:09:26.431 "num_blocks": 65536, 00:09:26.431 "uuid": "034d8888-100a-4442-a863-7f99be3061ac", 00:09:26.431 "assigned_rate_limits": { 00:09:26.431 "rw_ios_per_sec": 0, 00:09:26.431 "rw_mbytes_per_sec": 0, 00:09:26.431 "r_mbytes_per_sec": 0, 00:09:26.431 "w_mbytes_per_sec": 0 00:09:26.431 }, 00:09:26.431 "claimed": true, 00:09:26.431 "claim_type": "exclusive_write", 00:09:26.431 "zoned": false, 00:09:26.431 "supported_io_types": { 00:09:26.431 "read": true, 00:09:26.431 "write": true, 00:09:26.431 "unmap": true, 00:09:26.431 "flush": true, 00:09:26.431 "reset": true, 00:09:26.431 "nvme_admin": false, 00:09:26.431 "nvme_io": false, 00:09:26.431 "nvme_io_md": false, 00:09:26.431 "write_zeroes": true, 00:09:26.431 "zcopy": true, 00:09:26.431 "get_zone_info": false, 00:09:26.431 "zone_management": false, 00:09:26.431 "zone_append": false, 00:09:26.431 "compare": false, 00:09:26.431 "compare_and_write": false, 00:09:26.431 "abort": true, 00:09:26.431 "seek_hole": false, 00:09:26.431 "seek_data": false, 00:09:26.431 "copy": true, 00:09:26.431 "nvme_iov_md": false 00:09:26.431 }, 00:09:26.431 "memory_domains": [ 00:09:26.431 { 00:09:26.431 "dma_device_id": "system", 00:09:26.431 "dma_device_type": 1 00:09:26.431 }, 00:09:26.431 { 00:09:26.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.431 "dma_device_type": 2 00:09:26.431 } 00:09:26.431 ], 00:09:26.431 "driver_specific": {} 00:09:26.431 } 00:09:26.431 ] 
00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.431 
17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.431 "name": "Existed_Raid", 00:09:26.431 "uuid": "5030c904-7ca9-40ce-ae11-7b0eb302e5c6", 00:09:26.431 "strip_size_kb": 0, 00:09:26.431 "state": "online", 00:09:26.431 "raid_level": "raid1", 00:09:26.431 "superblock": true, 00:09:26.431 "num_base_bdevs": 3, 00:09:26.431 "num_base_bdevs_discovered": 3, 00:09:26.431 "num_base_bdevs_operational": 3, 00:09:26.431 "base_bdevs_list": [ 00:09:26.431 { 00:09:26.431 "name": "BaseBdev1", 00:09:26.431 "uuid": "7e41524c-68b0-41ee-a2dd-67f2afde9192", 00:09:26.431 "is_configured": true, 00:09:26.431 "data_offset": 2048, 00:09:26.431 "data_size": 63488 00:09:26.431 }, 00:09:26.431 { 00:09:26.431 "name": "BaseBdev2", 00:09:26.431 "uuid": "2a32faa3-906c-481c-9607-508f53801360", 00:09:26.431 "is_configured": true, 00:09:26.431 "data_offset": 2048, 00:09:26.431 "data_size": 63488 00:09:26.431 }, 00:09:26.431 { 00:09:26.431 "name": "BaseBdev3", 00:09:26.431 "uuid": "034d8888-100a-4442-a863-7f99be3061ac", 00:09:26.431 "is_configured": true, 00:09:26.431 "data_offset": 2048, 00:09:26.431 "data_size": 63488 00:09:26.431 } 00:09:26.431 ] 00:09:26.431 }' 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.431 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.999 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:26.999 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:26.999 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:26.999 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:26.999 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:26.999 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:26.999 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:26.999 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:26.999 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.999 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.999 [2024-11-20 17:01:50.697069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.999 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.999 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:26.999 "name": "Existed_Raid", 00:09:26.999 "aliases": [ 00:09:26.999 "5030c904-7ca9-40ce-ae11-7b0eb302e5c6" 00:09:26.999 ], 00:09:26.999 "product_name": "Raid Volume", 00:09:26.999 "block_size": 512, 00:09:26.999 "num_blocks": 63488, 00:09:26.999 "uuid": "5030c904-7ca9-40ce-ae11-7b0eb302e5c6", 00:09:26.999 "assigned_rate_limits": { 00:09:26.999 "rw_ios_per_sec": 0, 00:09:26.999 "rw_mbytes_per_sec": 0, 00:09:26.999 "r_mbytes_per_sec": 0, 00:09:26.999 "w_mbytes_per_sec": 0 00:09:26.999 }, 00:09:26.999 "claimed": false, 00:09:26.999 "zoned": false, 00:09:26.999 "supported_io_types": { 00:09:26.999 "read": true, 00:09:26.999 "write": true, 00:09:26.999 "unmap": false, 00:09:26.999 "flush": false, 00:09:26.999 "reset": true, 00:09:26.999 "nvme_admin": false, 00:09:26.999 "nvme_io": false, 00:09:26.999 "nvme_io_md": false, 00:09:26.999 "write_zeroes": true, 
00:09:26.999 "zcopy": false, 00:09:26.999 "get_zone_info": false, 00:09:26.999 "zone_management": false, 00:09:26.999 "zone_append": false, 00:09:26.999 "compare": false, 00:09:26.999 "compare_and_write": false, 00:09:26.999 "abort": false, 00:09:26.999 "seek_hole": false, 00:09:26.999 "seek_data": false, 00:09:27.000 "copy": false, 00:09:27.000 "nvme_iov_md": false 00:09:27.000 }, 00:09:27.000 "memory_domains": [ 00:09:27.000 { 00:09:27.000 "dma_device_id": "system", 00:09:27.000 "dma_device_type": 1 00:09:27.000 }, 00:09:27.000 { 00:09:27.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.000 "dma_device_type": 2 00:09:27.000 }, 00:09:27.000 { 00:09:27.000 "dma_device_id": "system", 00:09:27.000 "dma_device_type": 1 00:09:27.000 }, 00:09:27.000 { 00:09:27.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.000 "dma_device_type": 2 00:09:27.000 }, 00:09:27.000 { 00:09:27.000 "dma_device_id": "system", 00:09:27.000 "dma_device_type": 1 00:09:27.000 }, 00:09:27.000 { 00:09:27.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.000 "dma_device_type": 2 00:09:27.000 } 00:09:27.000 ], 00:09:27.000 "driver_specific": { 00:09:27.000 "raid": { 00:09:27.000 "uuid": "5030c904-7ca9-40ce-ae11-7b0eb302e5c6", 00:09:27.000 "strip_size_kb": 0, 00:09:27.000 "state": "online", 00:09:27.000 "raid_level": "raid1", 00:09:27.000 "superblock": true, 00:09:27.000 "num_base_bdevs": 3, 00:09:27.000 "num_base_bdevs_discovered": 3, 00:09:27.000 "num_base_bdevs_operational": 3, 00:09:27.000 "base_bdevs_list": [ 00:09:27.000 { 00:09:27.000 "name": "BaseBdev1", 00:09:27.000 "uuid": "7e41524c-68b0-41ee-a2dd-67f2afde9192", 00:09:27.000 "is_configured": true, 00:09:27.000 "data_offset": 2048, 00:09:27.000 "data_size": 63488 00:09:27.000 }, 00:09:27.000 { 00:09:27.000 "name": "BaseBdev2", 00:09:27.000 "uuid": "2a32faa3-906c-481c-9607-508f53801360", 00:09:27.000 "is_configured": true, 00:09:27.000 "data_offset": 2048, 00:09:27.000 "data_size": 63488 00:09:27.000 }, 00:09:27.000 { 
00:09:27.000 "name": "BaseBdev3", 00:09:27.000 "uuid": "034d8888-100a-4442-a863-7f99be3061ac", 00:09:27.000 "is_configured": true, 00:09:27.000 "data_offset": 2048, 00:09:27.000 "data_size": 63488 00:09:27.000 } 00:09:27.000 ] 00:09:27.000 } 00:09:27.000 } 00:09:27.000 }' 00:09:27.000 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:27.000 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:27.000 BaseBdev2 00:09:27.000 BaseBdev3' 00:09:27.000 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.000 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:27.000 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.000 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.000 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:27.000 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.000 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.259 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.259 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.259 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.259 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.259 17:01:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:27.259 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.259 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.259 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.259 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.259 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.259 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.259 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.259 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:27.259 17:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.259 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.259 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.259 17:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.259 [2024-11-20 17:01:51.020827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.259 
17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.259 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.517 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.517 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.518 "name": "Existed_Raid", 00:09:27.518 "uuid": "5030c904-7ca9-40ce-ae11-7b0eb302e5c6", 00:09:27.518 "strip_size_kb": 0, 00:09:27.518 "state": "online", 00:09:27.518 "raid_level": "raid1", 00:09:27.518 "superblock": true, 00:09:27.518 "num_base_bdevs": 3, 00:09:27.518 "num_base_bdevs_discovered": 2, 00:09:27.518 "num_base_bdevs_operational": 2, 00:09:27.518 "base_bdevs_list": [ 00:09:27.518 { 00:09:27.518 "name": null, 00:09:27.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.518 "is_configured": false, 00:09:27.518 "data_offset": 0, 00:09:27.518 "data_size": 63488 00:09:27.518 }, 00:09:27.518 { 00:09:27.518 "name": "BaseBdev2", 00:09:27.518 "uuid": "2a32faa3-906c-481c-9607-508f53801360", 00:09:27.518 "is_configured": true, 00:09:27.518 "data_offset": 2048, 00:09:27.518 "data_size": 63488 00:09:27.518 }, 00:09:27.518 { 00:09:27.518 "name": "BaseBdev3", 00:09:27.518 "uuid": "034d8888-100a-4442-a863-7f99be3061ac", 00:09:27.518 "is_configured": true, 00:09:27.518 "data_offset": 2048, 00:09:27.518 "data_size": 63488 00:09:27.518 } 00:09:27.518 ] 00:09:27.518 }' 00:09:27.518 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.518 
17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.776 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:27.776 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:27.776 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.776 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:27.776 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.776 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.035 [2024-11-20 17:01:51.687207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.035 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.035 [2024-11-20 17:01:51.822451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:28.036 [2024-11-20 17:01:51.822574] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:28.296 [2024-11-20 17:01:51.905008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:28.296 [2024-11-20 17:01:51.905067] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:28.296 [2024-11-20 17:01:51.905087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:28.296 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.296 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:28.296 17:01:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:28.296 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.296 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.296 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.296 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:28.296 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.296 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:28.296 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:28.296 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:28.296 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:28.296 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:28.296 17:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:28.296 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.296 17:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.296 BaseBdev2 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.296 [ 00:09:28.296 { 00:09:28.296 "name": "BaseBdev2", 00:09:28.296 "aliases": [ 00:09:28.296 "87ebe4cb-9e68-4fab-9519-902e7e2aaf64" 00:09:28.296 ], 00:09:28.296 "product_name": "Malloc disk", 00:09:28.296 "block_size": 512, 00:09:28.296 "num_blocks": 65536, 00:09:28.296 "uuid": "87ebe4cb-9e68-4fab-9519-902e7e2aaf64", 00:09:28.296 "assigned_rate_limits": { 00:09:28.296 "rw_ios_per_sec": 0, 00:09:28.296 "rw_mbytes_per_sec": 0, 00:09:28.296 "r_mbytes_per_sec": 0, 00:09:28.296 "w_mbytes_per_sec": 0 00:09:28.296 }, 00:09:28.296 "claimed": false, 00:09:28.296 "zoned": false, 00:09:28.296 "supported_io_types": { 00:09:28.296 "read": true, 00:09:28.296 "write": true, 00:09:28.296 "unmap": true, 00:09:28.296 "flush": true, 00:09:28.296 "reset": true, 00:09:28.296 "nvme_admin": false, 00:09:28.296 "nvme_io": false, 00:09:28.296 
"nvme_io_md": false, 00:09:28.296 "write_zeroes": true, 00:09:28.296 "zcopy": true, 00:09:28.296 "get_zone_info": false, 00:09:28.296 "zone_management": false, 00:09:28.296 "zone_append": false, 00:09:28.296 "compare": false, 00:09:28.296 "compare_and_write": false, 00:09:28.296 "abort": true, 00:09:28.296 "seek_hole": false, 00:09:28.296 "seek_data": false, 00:09:28.296 "copy": true, 00:09:28.296 "nvme_iov_md": false 00:09:28.296 }, 00:09:28.296 "memory_domains": [ 00:09:28.296 { 00:09:28.296 "dma_device_id": "system", 00:09:28.296 "dma_device_type": 1 00:09:28.296 }, 00:09:28.296 { 00:09:28.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.296 "dma_device_type": 2 00:09:28.296 } 00:09:28.296 ], 00:09:28.296 "driver_specific": {} 00:09:28.296 } 00:09:28.296 ] 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.296 BaseBdev3 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.296 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.297 [ 00:09:28.297 { 00:09:28.297 "name": "BaseBdev3", 00:09:28.297 "aliases": [ 00:09:28.297 "5aeda417-3465-4dcb-b2e2-0170e812164f" 00:09:28.297 ], 00:09:28.297 "product_name": "Malloc disk", 00:09:28.297 "block_size": 512, 00:09:28.297 "num_blocks": 65536, 00:09:28.297 "uuid": "5aeda417-3465-4dcb-b2e2-0170e812164f", 00:09:28.297 "assigned_rate_limits": { 00:09:28.297 "rw_ios_per_sec": 0, 00:09:28.297 "rw_mbytes_per_sec": 0, 00:09:28.297 "r_mbytes_per_sec": 0, 00:09:28.297 "w_mbytes_per_sec": 0 00:09:28.297 }, 00:09:28.297 "claimed": false, 00:09:28.297 "zoned": false, 00:09:28.297 "supported_io_types": { 00:09:28.297 "read": true, 00:09:28.297 "write": true, 00:09:28.297 "unmap": true, 00:09:28.297 "flush": true, 00:09:28.297 "reset": true, 00:09:28.297 "nvme_admin": false, 
00:09:28.297 "nvme_io": false, 00:09:28.297 "nvme_io_md": false, 00:09:28.297 "write_zeroes": true, 00:09:28.297 "zcopy": true, 00:09:28.297 "get_zone_info": false, 00:09:28.297 "zone_management": false, 00:09:28.297 "zone_append": false, 00:09:28.297 "compare": false, 00:09:28.297 "compare_and_write": false, 00:09:28.297 "abort": true, 00:09:28.297 "seek_hole": false, 00:09:28.297 "seek_data": false, 00:09:28.297 "copy": true, 00:09:28.297 "nvme_iov_md": false 00:09:28.297 }, 00:09:28.297 "memory_domains": [ 00:09:28.297 { 00:09:28.297 "dma_device_id": "system", 00:09:28.297 "dma_device_type": 1 00:09:28.297 }, 00:09:28.297 { 00:09:28.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.297 "dma_device_type": 2 00:09:28.297 } 00:09:28.297 ], 00:09:28.297 "driver_specific": {} 00:09:28.297 } 00:09:28.297 ] 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.297 [2024-11-20 17:01:52.119626] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.297 [2024-11-20 17:01:52.119714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.297 [2024-11-20 17:01:52.119752] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.297 [2024-11-20 17:01:52.122195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.297 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.297 
17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.557 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.557 "name": "Existed_Raid", 00:09:28.557 "uuid": "994e53e8-e5f8-4297-825f-6f550c2056c4", 00:09:28.557 "strip_size_kb": 0, 00:09:28.557 "state": "configuring", 00:09:28.557 "raid_level": "raid1", 00:09:28.557 "superblock": true, 00:09:28.557 "num_base_bdevs": 3, 00:09:28.557 "num_base_bdevs_discovered": 2, 00:09:28.557 "num_base_bdevs_operational": 3, 00:09:28.557 "base_bdevs_list": [ 00:09:28.557 { 00:09:28.557 "name": "BaseBdev1", 00:09:28.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.557 "is_configured": false, 00:09:28.557 "data_offset": 0, 00:09:28.557 "data_size": 0 00:09:28.557 }, 00:09:28.557 { 00:09:28.557 "name": "BaseBdev2", 00:09:28.557 "uuid": "87ebe4cb-9e68-4fab-9519-902e7e2aaf64", 00:09:28.557 "is_configured": true, 00:09:28.557 "data_offset": 2048, 00:09:28.557 "data_size": 63488 00:09:28.557 }, 00:09:28.557 { 00:09:28.557 "name": "BaseBdev3", 00:09:28.557 "uuid": "5aeda417-3465-4dcb-b2e2-0170e812164f", 00:09:28.557 "is_configured": true, 00:09:28.557 "data_offset": 2048, 00:09:28.557 "data_size": 63488 00:09:28.557 } 00:09:28.557 ] 00:09:28.557 }' 00:09:28.557 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.557 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.816 [2024-11-20 17:01:52.631885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:28.816 17:01:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.816 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.075 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.075 "name": 
"Existed_Raid", 00:09:29.075 "uuid": "994e53e8-e5f8-4297-825f-6f550c2056c4", 00:09:29.075 "strip_size_kb": 0, 00:09:29.075 "state": "configuring", 00:09:29.075 "raid_level": "raid1", 00:09:29.075 "superblock": true, 00:09:29.075 "num_base_bdevs": 3, 00:09:29.075 "num_base_bdevs_discovered": 1, 00:09:29.075 "num_base_bdevs_operational": 3, 00:09:29.075 "base_bdevs_list": [ 00:09:29.075 { 00:09:29.075 "name": "BaseBdev1", 00:09:29.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.075 "is_configured": false, 00:09:29.075 "data_offset": 0, 00:09:29.075 "data_size": 0 00:09:29.075 }, 00:09:29.075 { 00:09:29.075 "name": null, 00:09:29.075 "uuid": "87ebe4cb-9e68-4fab-9519-902e7e2aaf64", 00:09:29.075 "is_configured": false, 00:09:29.075 "data_offset": 0, 00:09:29.075 "data_size": 63488 00:09:29.075 }, 00:09:29.075 { 00:09:29.075 "name": "BaseBdev3", 00:09:29.075 "uuid": "5aeda417-3465-4dcb-b2e2-0170e812164f", 00:09:29.075 "is_configured": true, 00:09:29.075 "data_offset": 2048, 00:09:29.075 "data_size": 63488 00:09:29.075 } 00:09:29.075 ] 00:09:29.075 }' 00:09:29.075 17:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.075 17:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.334 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.334 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:29.334 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.334 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.334 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:29.594 
17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.594 [2024-11-20 17:01:53.258438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.594 BaseBdev1 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.594 [ 00:09:29.594 { 00:09:29.594 "name": "BaseBdev1", 00:09:29.594 "aliases": [ 00:09:29.594 "07f004ff-98fe-456f-b274-6dedbb20affa" 00:09:29.594 ], 00:09:29.594 "product_name": "Malloc disk", 00:09:29.594 "block_size": 512, 00:09:29.594 "num_blocks": 65536, 00:09:29.594 "uuid": "07f004ff-98fe-456f-b274-6dedbb20affa", 00:09:29.594 "assigned_rate_limits": { 00:09:29.594 "rw_ios_per_sec": 0, 00:09:29.594 "rw_mbytes_per_sec": 0, 00:09:29.594 "r_mbytes_per_sec": 0, 00:09:29.594 "w_mbytes_per_sec": 0 00:09:29.594 }, 00:09:29.594 "claimed": true, 00:09:29.594 "claim_type": "exclusive_write", 00:09:29.594 "zoned": false, 00:09:29.594 "supported_io_types": { 00:09:29.594 "read": true, 00:09:29.594 "write": true, 00:09:29.594 "unmap": true, 00:09:29.594 "flush": true, 00:09:29.594 "reset": true, 00:09:29.594 "nvme_admin": false, 00:09:29.594 "nvme_io": false, 00:09:29.594 "nvme_io_md": false, 00:09:29.594 "write_zeroes": true, 00:09:29.594 "zcopy": true, 00:09:29.594 "get_zone_info": false, 00:09:29.594 "zone_management": false, 00:09:29.594 "zone_append": false, 00:09:29.594 "compare": false, 00:09:29.594 "compare_and_write": false, 00:09:29.594 "abort": true, 00:09:29.594 "seek_hole": false, 00:09:29.594 "seek_data": false, 00:09:29.594 "copy": true, 00:09:29.594 "nvme_iov_md": false 00:09:29.594 }, 00:09:29.594 "memory_domains": [ 00:09:29.594 { 00:09:29.594 "dma_device_id": "system", 00:09:29.594 "dma_device_type": 1 00:09:29.594 }, 00:09:29.594 { 00:09:29.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.594 "dma_device_type": 2 00:09:29.594 } 00:09:29.594 ], 00:09:29.594 "driver_specific": {} 00:09:29.594 } 00:09:29.594 ] 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:29.594 
17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.594 "name": "Existed_Raid", 00:09:29.594 "uuid": "994e53e8-e5f8-4297-825f-6f550c2056c4", 00:09:29.594 "strip_size_kb": 0, 
00:09:29.594 "state": "configuring", 00:09:29.594 "raid_level": "raid1", 00:09:29.594 "superblock": true, 00:09:29.594 "num_base_bdevs": 3, 00:09:29.594 "num_base_bdevs_discovered": 2, 00:09:29.594 "num_base_bdevs_operational": 3, 00:09:29.594 "base_bdevs_list": [ 00:09:29.594 { 00:09:29.594 "name": "BaseBdev1", 00:09:29.594 "uuid": "07f004ff-98fe-456f-b274-6dedbb20affa", 00:09:29.594 "is_configured": true, 00:09:29.594 "data_offset": 2048, 00:09:29.594 "data_size": 63488 00:09:29.594 }, 00:09:29.594 { 00:09:29.594 "name": null, 00:09:29.594 "uuid": "87ebe4cb-9e68-4fab-9519-902e7e2aaf64", 00:09:29.594 "is_configured": false, 00:09:29.594 "data_offset": 0, 00:09:29.594 "data_size": 63488 00:09:29.594 }, 00:09:29.594 { 00:09:29.594 "name": "BaseBdev3", 00:09:29.594 "uuid": "5aeda417-3465-4dcb-b2e2-0170e812164f", 00:09:29.594 "is_configured": true, 00:09:29.594 "data_offset": 2048, 00:09:29.594 "data_size": 63488 00:09:29.594 } 00:09:29.594 ] 00:09:29.594 }' 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.594 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.163 [2024-11-20 17:01:53.842635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.163 "name": "Existed_Raid", 00:09:30.163 "uuid": "994e53e8-e5f8-4297-825f-6f550c2056c4", 00:09:30.163 "strip_size_kb": 0, 00:09:30.163 "state": "configuring", 00:09:30.163 "raid_level": "raid1", 00:09:30.163 "superblock": true, 00:09:30.163 "num_base_bdevs": 3, 00:09:30.163 "num_base_bdevs_discovered": 1, 00:09:30.163 "num_base_bdevs_operational": 3, 00:09:30.163 "base_bdevs_list": [ 00:09:30.163 { 00:09:30.163 "name": "BaseBdev1", 00:09:30.163 "uuid": "07f004ff-98fe-456f-b274-6dedbb20affa", 00:09:30.163 "is_configured": true, 00:09:30.163 "data_offset": 2048, 00:09:30.163 "data_size": 63488 00:09:30.163 }, 00:09:30.163 { 00:09:30.163 "name": null, 00:09:30.163 "uuid": "87ebe4cb-9e68-4fab-9519-902e7e2aaf64", 00:09:30.163 "is_configured": false, 00:09:30.163 "data_offset": 0, 00:09:30.163 "data_size": 63488 00:09:30.163 }, 00:09:30.163 { 00:09:30.163 "name": null, 00:09:30.163 "uuid": "5aeda417-3465-4dcb-b2e2-0170e812164f", 00:09:30.163 "is_configured": false, 00:09:30.163 "data_offset": 0, 00:09:30.163 "data_size": 63488 00:09:30.163 } 00:09:30.163 ] 00:09:30.163 }' 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.163 17:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.732 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.732 17:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.732 17:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.732 17:01:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:30.732 17:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.732 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:30.732 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:30.732 17:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.732 17:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.732 [2024-11-20 17:01:54.430943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.732 17:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.732 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.732 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.732 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.732 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.732 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.732 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.733 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.733 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.733 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:30.733 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.733 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.733 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.733 17:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.733 17:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.733 17:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.733 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.733 "name": "Existed_Raid", 00:09:30.733 "uuid": "994e53e8-e5f8-4297-825f-6f550c2056c4", 00:09:30.733 "strip_size_kb": 0, 00:09:30.733 "state": "configuring", 00:09:30.733 "raid_level": "raid1", 00:09:30.733 "superblock": true, 00:09:30.733 "num_base_bdevs": 3, 00:09:30.733 "num_base_bdevs_discovered": 2, 00:09:30.733 "num_base_bdevs_operational": 3, 00:09:30.733 "base_bdevs_list": [ 00:09:30.733 { 00:09:30.733 "name": "BaseBdev1", 00:09:30.733 "uuid": "07f004ff-98fe-456f-b274-6dedbb20affa", 00:09:30.733 "is_configured": true, 00:09:30.733 "data_offset": 2048, 00:09:30.733 "data_size": 63488 00:09:30.733 }, 00:09:30.733 { 00:09:30.733 "name": null, 00:09:30.733 "uuid": "87ebe4cb-9e68-4fab-9519-902e7e2aaf64", 00:09:30.733 "is_configured": false, 00:09:30.733 "data_offset": 0, 00:09:30.733 "data_size": 63488 00:09:30.733 }, 00:09:30.733 { 00:09:30.733 "name": "BaseBdev3", 00:09:30.733 "uuid": "5aeda417-3465-4dcb-b2e2-0170e812164f", 00:09:30.733 "is_configured": true, 00:09:30.733 "data_offset": 2048, 00:09:30.733 "data_size": 63488 00:09:30.733 } 00:09:30.733 ] 00:09:30.733 }' 00:09:30.733 17:01:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.733 17:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.300 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.300 17:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:31.300 17:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.300 17:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.300 17:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.300 [2024-11-20 17:01:55.035077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.300 17:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.559 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.559 "name": "Existed_Raid", 00:09:31.559 "uuid": "994e53e8-e5f8-4297-825f-6f550c2056c4", 00:09:31.559 "strip_size_kb": 0, 00:09:31.559 "state": "configuring", 00:09:31.559 "raid_level": "raid1", 00:09:31.559 "superblock": true, 00:09:31.559 "num_base_bdevs": 3, 00:09:31.559 "num_base_bdevs_discovered": 1, 00:09:31.559 "num_base_bdevs_operational": 3, 00:09:31.559 "base_bdevs_list": [ 00:09:31.559 { 00:09:31.559 "name": null, 00:09:31.559 "uuid": "07f004ff-98fe-456f-b274-6dedbb20affa", 00:09:31.559 "is_configured": false, 00:09:31.559 "data_offset": 0, 00:09:31.559 "data_size": 63488 00:09:31.559 }, 00:09:31.559 { 00:09:31.559 "name": null, 00:09:31.559 "uuid": 
"87ebe4cb-9e68-4fab-9519-902e7e2aaf64", 00:09:31.559 "is_configured": false, 00:09:31.559 "data_offset": 0, 00:09:31.559 "data_size": 63488 00:09:31.559 }, 00:09:31.559 { 00:09:31.559 "name": "BaseBdev3", 00:09:31.559 "uuid": "5aeda417-3465-4dcb-b2e2-0170e812164f", 00:09:31.559 "is_configured": true, 00:09:31.559 "data_offset": 2048, 00:09:31.559 "data_size": 63488 00:09:31.559 } 00:09:31.559 ] 00:09:31.559 }' 00:09:31.559 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.559 17:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.818 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:31.818 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.818 17:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.818 17:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.818 17:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.076 [2024-11-20 17:01:55.713722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.076 "name": "Existed_Raid", 00:09:32.076 "uuid": "994e53e8-e5f8-4297-825f-6f550c2056c4", 00:09:32.076 "strip_size_kb": 0, 00:09:32.076 "state": "configuring", 00:09:32.076 
"raid_level": "raid1", 00:09:32.076 "superblock": true, 00:09:32.076 "num_base_bdevs": 3, 00:09:32.076 "num_base_bdevs_discovered": 2, 00:09:32.076 "num_base_bdevs_operational": 3, 00:09:32.076 "base_bdevs_list": [ 00:09:32.076 { 00:09:32.076 "name": null, 00:09:32.076 "uuid": "07f004ff-98fe-456f-b274-6dedbb20affa", 00:09:32.076 "is_configured": false, 00:09:32.076 "data_offset": 0, 00:09:32.076 "data_size": 63488 00:09:32.076 }, 00:09:32.076 { 00:09:32.076 "name": "BaseBdev2", 00:09:32.076 "uuid": "87ebe4cb-9e68-4fab-9519-902e7e2aaf64", 00:09:32.076 "is_configured": true, 00:09:32.076 "data_offset": 2048, 00:09:32.076 "data_size": 63488 00:09:32.076 }, 00:09:32.076 { 00:09:32.076 "name": "BaseBdev3", 00:09:32.076 "uuid": "5aeda417-3465-4dcb-b2e2-0170e812164f", 00:09:32.076 "is_configured": true, 00:09:32.076 "data_offset": 2048, 00:09:32.076 "data_size": 63488 00:09:32.076 } 00:09:32.076 ] 00:09:32.076 }' 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.076 17:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.643 17:01:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 07f004ff-98fe-456f-b274-6dedbb20affa 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.643 [2024-11-20 17:01:56.396091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:32.643 [2024-11-20 17:01:56.396514] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:32.643 [2024-11-20 17:01:56.396538] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:32.643 NewBaseBdev 00:09:32.643 [2024-11-20 17:01:56.396875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:32.643 [2024-11-20 17:01:56.397050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:32.643 [2024-11-20 17:01:56.397078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:32.643 [2024-11-20 17:01:56.397236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:32.643 
17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.643 [ 00:09:32.643 { 00:09:32.643 "name": "NewBaseBdev", 00:09:32.643 "aliases": [ 00:09:32.643 "07f004ff-98fe-456f-b274-6dedbb20affa" 00:09:32.643 ], 00:09:32.643 "product_name": "Malloc disk", 00:09:32.643 "block_size": 512, 00:09:32.643 "num_blocks": 65536, 00:09:32.643 "uuid": "07f004ff-98fe-456f-b274-6dedbb20affa", 00:09:32.643 "assigned_rate_limits": { 00:09:32.643 "rw_ios_per_sec": 0, 00:09:32.643 "rw_mbytes_per_sec": 0, 00:09:32.643 "r_mbytes_per_sec": 0, 00:09:32.643 "w_mbytes_per_sec": 0 00:09:32.643 }, 00:09:32.643 "claimed": true, 00:09:32.643 "claim_type": "exclusive_write", 00:09:32.643 
"zoned": false, 00:09:32.643 "supported_io_types": { 00:09:32.643 "read": true, 00:09:32.643 "write": true, 00:09:32.643 "unmap": true, 00:09:32.643 "flush": true, 00:09:32.643 "reset": true, 00:09:32.643 "nvme_admin": false, 00:09:32.643 "nvme_io": false, 00:09:32.643 "nvme_io_md": false, 00:09:32.643 "write_zeroes": true, 00:09:32.643 "zcopy": true, 00:09:32.643 "get_zone_info": false, 00:09:32.643 "zone_management": false, 00:09:32.643 "zone_append": false, 00:09:32.643 "compare": false, 00:09:32.643 "compare_and_write": false, 00:09:32.643 "abort": true, 00:09:32.643 "seek_hole": false, 00:09:32.643 "seek_data": false, 00:09:32.643 "copy": true, 00:09:32.643 "nvme_iov_md": false 00:09:32.643 }, 00:09:32.643 "memory_domains": [ 00:09:32.643 { 00:09:32.643 "dma_device_id": "system", 00:09:32.643 "dma_device_type": 1 00:09:32.643 }, 00:09:32.643 { 00:09:32.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.643 "dma_device_type": 2 00:09:32.643 } 00:09:32.643 ], 00:09:32.643 "driver_specific": {} 00:09:32.643 } 00:09:32.643 ] 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.643 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.643 "name": "Existed_Raid", 00:09:32.643 "uuid": "994e53e8-e5f8-4297-825f-6f550c2056c4", 00:09:32.643 "strip_size_kb": 0, 00:09:32.643 "state": "online", 00:09:32.643 "raid_level": "raid1", 00:09:32.643 "superblock": true, 00:09:32.643 "num_base_bdevs": 3, 00:09:32.643 "num_base_bdevs_discovered": 3, 00:09:32.643 "num_base_bdevs_operational": 3, 00:09:32.643 "base_bdevs_list": [ 00:09:32.643 { 00:09:32.643 "name": "NewBaseBdev", 00:09:32.643 "uuid": "07f004ff-98fe-456f-b274-6dedbb20affa", 00:09:32.643 "is_configured": true, 00:09:32.643 "data_offset": 2048, 00:09:32.644 "data_size": 63488 00:09:32.644 }, 00:09:32.644 { 00:09:32.644 "name": "BaseBdev2", 00:09:32.644 "uuid": "87ebe4cb-9e68-4fab-9519-902e7e2aaf64", 00:09:32.644 "is_configured": true, 00:09:32.644 "data_offset": 2048, 00:09:32.644 "data_size": 63488 00:09:32.644 }, 00:09:32.644 
{ 00:09:32.644 "name": "BaseBdev3", 00:09:32.644 "uuid": "5aeda417-3465-4dcb-b2e2-0170e812164f", 00:09:32.644 "is_configured": true, 00:09:32.644 "data_offset": 2048, 00:09:32.644 "data_size": 63488 00:09:32.644 } 00:09:32.644 ] 00:09:32.644 }' 00:09:32.644 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.644 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.211 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:33.211 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:33.211 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:33.211 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:33.211 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:33.211 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.211 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.211 17:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:33.211 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.211 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.211 [2024-11-20 17:01:56.960652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.211 17:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.211 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.211 "name": "Existed_Raid", 00:09:33.211 
"aliases": [
00:09:33.211 "994e53e8-e5f8-4297-825f-6f550c2056c4"
00:09:33.211 ],
00:09:33.211 "product_name": "Raid Volume",
00:09:33.211 "block_size": 512,
00:09:33.211 "num_blocks": 63488,
00:09:33.211 "uuid": "994e53e8-e5f8-4297-825f-6f550c2056c4",
00:09:33.211 "assigned_rate_limits": {
00:09:33.211 "rw_ios_per_sec": 0,
00:09:33.211 "rw_mbytes_per_sec": 0,
00:09:33.211 "r_mbytes_per_sec": 0,
00:09:33.211 "w_mbytes_per_sec": 0
00:09:33.211 },
00:09:33.211 "claimed": false,
00:09:33.211 "zoned": false,
00:09:33.211 "supported_io_types": {
00:09:33.211 "read": true,
00:09:33.211 "write": true,
00:09:33.211 "unmap": false,
00:09:33.211 "flush": false,
00:09:33.211 "reset": true,
00:09:33.211 "nvme_admin": false,
00:09:33.211 "nvme_io": false,
00:09:33.211 "nvme_io_md": false,
00:09:33.211 "write_zeroes": true,
00:09:33.211 "zcopy": false,
00:09:33.211 "get_zone_info": false,
00:09:33.211 "zone_management": false,
00:09:33.211 "zone_append": false,
00:09:33.211 "compare": false,
00:09:33.211 "compare_and_write": false,
00:09:33.211 "abort": false,
00:09:33.211 "seek_hole": false,
00:09:33.211 "seek_data": false,
00:09:33.211 "copy": false,
00:09:33.211 "nvme_iov_md": false
00:09:33.211 },
00:09:33.211 "memory_domains": [
00:09:33.211 {
00:09:33.211 "dma_device_id": "system",
00:09:33.211 "dma_device_type": 1
00:09:33.211 },
00:09:33.211 {
00:09:33.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:33.211 "dma_device_type": 2
00:09:33.211 },
00:09:33.211 {
00:09:33.211 "dma_device_id": "system",
00:09:33.211 "dma_device_type": 1
00:09:33.211 },
00:09:33.211 {
00:09:33.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:33.211 "dma_device_type": 2
00:09:33.211 },
00:09:33.211 {
00:09:33.211 "dma_device_id": "system",
00:09:33.211 "dma_device_type": 1
00:09:33.211 },
00:09:33.211 {
00:09:33.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:33.211 "dma_device_type": 2
00:09:33.211 }
00:09:33.211 ],
00:09:33.211 "driver_specific": {
00:09:33.211 "raid": {
00:09:33.211 "uuid": "994e53e8-e5f8-4297-825f-6f550c2056c4",
00:09:33.211 "strip_size_kb": 0,
00:09:33.211 "state": "online",
00:09:33.211 "raid_level": "raid1",
00:09:33.211 "superblock": true,
00:09:33.211 "num_base_bdevs": 3,
00:09:33.211 "num_base_bdevs_discovered": 3,
00:09:33.211 "num_base_bdevs_operational": 3,
00:09:33.211 "base_bdevs_list": [
00:09:33.211 {
00:09:33.211 "name": "NewBaseBdev",
00:09:33.211 "uuid": "07f004ff-98fe-456f-b274-6dedbb20affa",
00:09:33.211 "is_configured": true,
00:09:33.211 "data_offset": 2048,
00:09:33.211 "data_size": 63488
00:09:33.211 },
00:09:33.211 {
00:09:33.211 "name": "BaseBdev2",
00:09:33.211 "uuid": "87ebe4cb-9e68-4fab-9519-902e7e2aaf64",
00:09:33.211 "is_configured": true,
00:09:33.211 "data_offset": 2048,
00:09:33.211 "data_size": 63488
00:09:33.211 },
00:09:33.211 {
00:09:33.211 "name": "BaseBdev3",
00:09:33.211 "uuid": "5aeda417-3465-4dcb-b2e2-0170e812164f",
00:09:33.211 "is_configured": true,
00:09:33.211 "data_offset": 2048,
00:09:33.211 "data_size": 63488
00:09:33.211 }
00:09:33.211 ]
00:09:33.211 }
00:09:33.211 }
00:09:33.211 }'
00:09:33.211 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:33.211 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:09:33.211 BaseBdev2
00:09:33.211 BaseBdev3'
00:09:33.211 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 17:01:57.284358] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
[2024-11-20 17:01:57.284393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-20 17:01:57.284465] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-20 17:01:57.284922] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-20 17:01:57.284938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67895
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67895 ']'
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67895
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:33.471 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67895
killing process with pid 67895
17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:33.472 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:33.472 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67895'
00:09:33.472 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67895
[2024-11-20 17:01:57.321090] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:33.472 17:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67895
00:09:34.038 [2024-11-20 17:01:57.606205] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:34.973 17:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:09:34.973
00:09:34.973 real 0m12.104s
00:09:34.973 user 0m20.192s
00:09:34.973 sys 0m1.526s
00:09:34.973 ************************************
00:09:34.973 END TEST raid_state_function_test_sb
00:09:34.973 ************************************
00:09:34.973 17:01:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:34.973 17:01:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.973 17:01:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3
00:09:34.973 17:01:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:34.973 17:01:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:34.973 17:01:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:34.973 ************************************
00:09:34.973 START TEST raid_superblock_test
00:09:34.973 ************************************
00:09:34.973 17:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3
00:09:34.973 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:09:34.973 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:09:34.973 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:09:34.973 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:09:34.973 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68534
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68534
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68534 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:34.974 17:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.233 [2024-11-20 17:01:58.931378] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization...
00:09:35.233 [2024-11-20 17:01:58.931837] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68534 ]
00:09:35.495 [2024-11-20 17:01:59.123745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:35.495 [2024-11-20 17:01:59.294252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:35.758 [2024-11-20 17:01:59.546561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-11-20 17:01:59.546601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:36.327 17:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:36.327 17:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:09:36.327 17:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:09:36.327 17:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:36.327 17:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:09:36.327 17:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:09:36.327 17:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:09:36.327 17:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:36.327 17:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:36.327 17:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:36.327 17:01:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:09:36.327 17:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.327 17:01:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.327 malloc1
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 17:02:00.025458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
[2024-11-20 17:02:00.025679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-20 17:02:00.025784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-11-20 17:02:00.026046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-20 17:02:00.028944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-20 17:02:00.029123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
pt1
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
malloc2
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 17:02:00.082630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
[2024-11-20 17:02:00.082716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-20 17:02:00.082750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
[2024-11-20 17:02:00.082780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-20 17:02:00.085736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-20 17:02:00.085834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:36.327 pt2
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
malloc3
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 17:02:00.148652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
[2024-11-20 17:02:00.148901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-20 17:02:00.148947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
[2024-11-20 17:02:00.148966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-20 17:02:00.151868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-20 17:02:00.152047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
pt3
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.327 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 17:02:00.160929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
[2024-11-20 17:02:00.163463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
[2024-11-20 17:02:00.163561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
[2024-11-20 17:02:00.163858] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-20 17:02:00.163885] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
[2024-11-20 17:02:00.164208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:36.328 [2024-11-20 17:02:00.164442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-20 17:02:00.164460] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
[2024-11-20 17:02:00.164618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:36.328 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.328 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:09:36.328 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:36.328 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:36.328 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:36.328 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:36.328 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:36.328 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:36.328 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:36.328 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:36.328 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:36.328 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:36.328 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:36.328 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.328 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.328 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.586 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:36.586 "name": "raid_bdev1",
00:09:36.586 "uuid": "8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8",
00:09:36.586 "strip_size_kb": 0,
00:09:36.586 "state": "online",
00:09:36.586 "raid_level": "raid1",
00:09:36.586 "superblock": true,
00:09:36.586 "num_base_bdevs": 3,
00:09:36.586 "num_base_bdevs_discovered": 3,
00:09:36.586 "num_base_bdevs_operational": 3,
00:09:36.586 "base_bdevs_list": [
00:09:36.586 {
00:09:36.586 "name": "pt1",
00:09:36.586 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:36.586 "is_configured": true,
00:09:36.586 "data_offset": 2048,
00:09:36.586 "data_size": 63488
00:09:36.586 },
00:09:36.586 {
00:09:36.586 "name": "pt2",
00:09:36.586 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:36.586 "is_configured": true,
00:09:36.586 "data_offset": 2048,
00:09:36.586 "data_size": 63488
00:09:36.586 },
00:09:36.586 {
00:09:36.586 "name": "pt3",
00:09:36.586 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:36.586 "is_configured": true,
00:09:36.586 "data_offset": 2048,
00:09:36.586 "data_size": 63488
00:09:36.586 }
00:09:36.586 ]
00:09:36.586 }'
00:09:36.587 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:36.587 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.845 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:09:36.845 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:36.845 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:36.845 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:36.845 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:36.845 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:36.845 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:36.845 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:36.845 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.845 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:36.845 [2024-11-20 17:02:00.681486] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:36.845 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:37.104 "name": "raid_bdev1",
00:09:37.104 "aliases": [
00:09:37.104 "8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8"
00:09:37.104 ],
00:09:37.104 "product_name": "Raid Volume",
00:09:37.104 "block_size": 512,
00:09:37.104 "num_blocks": 63488,
00:09:37.104 "uuid": "8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8",
00:09:37.104 "assigned_rate_limits": {
00:09:37.104 "rw_ios_per_sec": 0,
00:09:37.104 "rw_mbytes_per_sec": 0,
00:09:37.104 "r_mbytes_per_sec": 0,
00:09:37.104 "w_mbytes_per_sec": 0
00:09:37.104 },
00:09:37.104 "claimed": false,
00:09:37.104 "zoned": false,
00:09:37.104 "supported_io_types": {
00:09:37.104 "read": true,
00:09:37.104 "write": true,
00:09:37.104 "unmap": false,
00:09:37.104 "flush": false,
00:09:37.104 "reset": true,
00:09:37.104 "nvme_admin": false,
00:09:37.104 "nvme_io": false,
00:09:37.104 "nvme_io_md": false,
00:09:37.104 "write_zeroes": true,
00:09:37.104 "zcopy": false,
00:09:37.104 "get_zone_info": false,
00:09:37.104 "zone_management": false,
00:09:37.104 "zone_append": false,
00:09:37.104 "compare": false,
00:09:37.104 "compare_and_write": false,
00:09:37.104 "abort": false,
00:09:37.104 "seek_hole": false,
00:09:37.104 "seek_data": false,
00:09:37.104 "copy": false,
00:09:37.104 "nvme_iov_md": false
00:09:37.104 },
00:09:37.104 "memory_domains": [
00:09:37.104 {
00:09:37.104 "dma_device_id": "system",
00:09:37.104 "dma_device_type": 1
00:09:37.104 },
00:09:37.104 {
00:09:37.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:37.104 "dma_device_type": 2
00:09:37.104 },
00:09:37.104 {
00:09:37.104 "dma_device_id": "system",
00:09:37.104 "dma_device_type": 1
00:09:37.104 },
00:09:37.104 {
00:09:37.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:37.104 "dma_device_type": 2
00:09:37.104 },
00:09:37.104 {
00:09:37.104 "dma_device_id": "system",
00:09:37.104 "dma_device_type": 1
00:09:37.104 },
00:09:37.104 {
00:09:37.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:37.104 "dma_device_type": 2
00:09:37.104 }
00:09:37.104 ],
00:09:37.104 "driver_specific": {
00:09:37.104 "raid": {
00:09:37.104 "uuid": "8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8",
00:09:37.104 "strip_size_kb": 0,
00:09:37.104 "state": "online",
00:09:37.104 "raid_level": "raid1",
00:09:37.104 "superblock": true,
00:09:37.104 "num_base_bdevs": 3,
00:09:37.104 "num_base_bdevs_discovered": 3,
00:09:37.104 "num_base_bdevs_operational": 3,
00:09:37.104 "base_bdevs_list": [
00:09:37.104 {
00:09:37.104 "name": "pt1",
00:09:37.104 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:37.104 "is_configured": true,
00:09:37.104 "data_offset": 2048,
00:09:37.104 "data_size": 63488
00:09:37.104 },
00:09:37.104 {
00:09:37.104 "name": "pt2",
00:09:37.104 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:37.104 "is_configured": true,
00:09:37.104 "data_offset": 2048,
00:09:37.104 "data_size": 63488
00:09:37.104 },
00:09:37.104 {
00:09:37.104 "name": "pt3",
00:09:37.104 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:37.104 "is_configured": true,
00:09:37.104 "data_offset": 2048,
00:09:37.104 "data_size": 63488
00:09:37.104 }
00:09:37.104 ]
00:09:37.104 }
00:09:37.104 }
00:09:37.104 }'
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:37.104 pt2
00:09:37.104 pt3'
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.104 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.363 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:37.363 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:37.363 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:37.363 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.363 17:02:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.363 17:02:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
[2024-11-20 17:02:01.001551] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8
00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8 ']'
00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 17:02:01.053234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
[2024-11-20 17:02:01.053271] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-20 17:02:01.053398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-20 17:02:01.053493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-20 17:02:01.053509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:09:37.363
17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.363 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.364 [2024-11-20 17:02:01.205382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:37.364 [2024-11-20 17:02:01.208083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:37.364 [2024-11-20 17:02:01.208195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:09:37.364 [2024-11-20 17:02:01.208283] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:37.364 [2024-11-20 17:02:01.208390] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:37.364 [2024-11-20 17:02:01.208425] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:37.364 [2024-11-20 17:02:01.208453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:37.364 [2024-11-20 17:02:01.208468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:37.364 request: 00:09:37.364 { 00:09:37.364 "name": "raid_bdev1", 00:09:37.364 "raid_level": "raid1", 00:09:37.364 "base_bdevs": [ 00:09:37.364 "malloc1", 00:09:37.364 "malloc2", 00:09:37.364 "malloc3" 00:09:37.364 ], 00:09:37.364 "superblock": false, 00:09:37.364 "method": "bdev_raid_create", 00:09:37.364 "req_id": 1 00:09:37.364 } 00:09:37.364 Got JSON-RPC error response 00:09:37.364 response: 00:09:37.364 { 00:09:37.364 "code": -17, 00:09:37.364 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:37.364 } 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.364 17:02:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.364 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.623 [2024-11-20 17:02:01.277341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:37.623 [2024-11-20 17:02:01.277537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.623 [2024-11-20 17:02:01.277579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:37.623 [2024-11-20 17:02:01.277596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.623 [2024-11-20 17:02:01.280458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.623 [2024-11-20 17:02:01.280500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:37.623 [2024-11-20 17:02:01.280605] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:37.623 [2024-11-20 17:02:01.280685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:37.623 pt1 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.623 "name": "raid_bdev1", 00:09:37.623 "uuid": "8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8", 00:09:37.623 "strip_size_kb": 0, 00:09:37.623 "state": "configuring", 00:09:37.623 
"raid_level": "raid1", 00:09:37.623 "superblock": true, 00:09:37.623 "num_base_bdevs": 3, 00:09:37.623 "num_base_bdevs_discovered": 1, 00:09:37.623 "num_base_bdevs_operational": 3, 00:09:37.623 "base_bdevs_list": [ 00:09:37.623 { 00:09:37.623 "name": "pt1", 00:09:37.623 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:37.623 "is_configured": true, 00:09:37.623 "data_offset": 2048, 00:09:37.623 "data_size": 63488 00:09:37.623 }, 00:09:37.623 { 00:09:37.623 "name": null, 00:09:37.623 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:37.623 "is_configured": false, 00:09:37.623 "data_offset": 2048, 00:09:37.623 "data_size": 63488 00:09:37.623 }, 00:09:37.623 { 00:09:37.623 "name": null, 00:09:37.623 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:37.623 "is_configured": false, 00:09:37.623 "data_offset": 2048, 00:09:37.623 "data_size": 63488 00:09:37.623 } 00:09:37.623 ] 00:09:37.623 }' 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.623 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.190 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.191 [2024-11-20 17:02:01.793608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:38.191 [2024-11-20 17:02:01.793826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.191 [2024-11-20 17:02:01.793907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:38.191 [2024-11-20 17:02:01.794065] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.191 [2024-11-20 17:02:01.794673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.191 [2024-11-20 17:02:01.794901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:38.191 [2024-11-20 17:02:01.795141] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:38.191 [2024-11-20 17:02:01.795285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:38.191 pt2 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.191 [2024-11-20 17:02:01.801553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.191 "name": "raid_bdev1", 00:09:38.191 "uuid": "8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8", 00:09:38.191 "strip_size_kb": 0, 00:09:38.191 "state": "configuring", 00:09:38.191 "raid_level": "raid1", 00:09:38.191 "superblock": true, 00:09:38.191 "num_base_bdevs": 3, 00:09:38.191 "num_base_bdevs_discovered": 1, 00:09:38.191 "num_base_bdevs_operational": 3, 00:09:38.191 "base_bdevs_list": [ 00:09:38.191 { 00:09:38.191 "name": "pt1", 00:09:38.191 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:38.191 "is_configured": true, 00:09:38.191 "data_offset": 2048, 00:09:38.191 "data_size": 63488 00:09:38.191 }, 00:09:38.191 { 00:09:38.191 "name": null, 00:09:38.191 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:38.191 "is_configured": false, 00:09:38.191 "data_offset": 0, 00:09:38.191 "data_size": 63488 00:09:38.191 }, 00:09:38.191 { 00:09:38.191 "name": null, 00:09:38.191 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:38.191 "is_configured": false, 00:09:38.191 "data_offset": 2048, 00:09:38.191 
"data_size": 63488 00:09:38.191 } 00:09:38.191 ] 00:09:38.191 }' 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.191 17:02:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.759 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:38.759 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:38.759 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:38.759 17:02:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.759 17:02:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.759 [2024-11-20 17:02:02.345818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:38.759 [2024-11-20 17:02:02.345907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.759 [2024-11-20 17:02:02.345937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:38.759 [2024-11-20 17:02:02.345956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.759 [2024-11-20 17:02:02.346526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.759 [2024-11-20 17:02:02.346556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:38.760 [2024-11-20 17:02:02.346650] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:38.760 [2024-11-20 17:02:02.346698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:38.760 pt2 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.760 [2024-11-20 17:02:02.353761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:38.760 [2024-11-20 17:02:02.353874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.760 [2024-11-20 17:02:02.353921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:38.760 [2024-11-20 17:02:02.353939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.760 [2024-11-20 17:02:02.354376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.760 [2024-11-20 17:02:02.354423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:38.760 [2024-11-20 17:02:02.354500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:38.760 [2024-11-20 17:02:02.354535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:38.760 [2024-11-20 17:02:02.354696] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:38.760 [2024-11-20 17:02:02.354719] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:38.760 [2024-11-20 17:02:02.355074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:38.760 [2024-11-20 17:02:02.355300] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:09:38.760 [2024-11-20 17:02:02.355315] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:38.760 [2024-11-20 17:02:02.355504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.760 pt3 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.760 "name": "raid_bdev1", 00:09:38.760 "uuid": "8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8", 00:09:38.760 "strip_size_kb": 0, 00:09:38.760 "state": "online", 00:09:38.760 "raid_level": "raid1", 00:09:38.760 "superblock": true, 00:09:38.760 "num_base_bdevs": 3, 00:09:38.760 "num_base_bdevs_discovered": 3, 00:09:38.760 "num_base_bdevs_operational": 3, 00:09:38.760 "base_bdevs_list": [ 00:09:38.760 { 00:09:38.760 "name": "pt1", 00:09:38.760 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:38.760 "is_configured": true, 00:09:38.760 "data_offset": 2048, 00:09:38.760 "data_size": 63488 00:09:38.760 }, 00:09:38.760 { 00:09:38.760 "name": "pt2", 00:09:38.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:38.760 "is_configured": true, 00:09:38.760 "data_offset": 2048, 00:09:38.760 "data_size": 63488 00:09:38.760 }, 00:09:38.760 { 00:09:38.760 "name": "pt3", 00:09:38.760 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:38.760 "is_configured": true, 00:09:38.760 "data_offset": 2048, 00:09:38.760 "data_size": 63488 00:09:38.760 } 00:09:38.760 ] 00:09:38.760 }' 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.760 17:02:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.019 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:39.019 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:39.019 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:39.019 17:02:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:39.019 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.019 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.019 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:39.019 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.019 17:02:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.019 17:02:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.019 [2024-11-20 17:02:02.882404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.278 17:02:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.278 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.278 "name": "raid_bdev1", 00:09:39.278 "aliases": [ 00:09:39.278 "8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8" 00:09:39.278 ], 00:09:39.278 "product_name": "Raid Volume", 00:09:39.278 "block_size": 512, 00:09:39.278 "num_blocks": 63488, 00:09:39.278 "uuid": "8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8", 00:09:39.278 "assigned_rate_limits": { 00:09:39.278 "rw_ios_per_sec": 0, 00:09:39.278 "rw_mbytes_per_sec": 0, 00:09:39.278 "r_mbytes_per_sec": 0, 00:09:39.278 "w_mbytes_per_sec": 0 00:09:39.278 }, 00:09:39.278 "claimed": false, 00:09:39.278 "zoned": false, 00:09:39.278 "supported_io_types": { 00:09:39.278 "read": true, 00:09:39.278 "write": true, 00:09:39.278 "unmap": false, 00:09:39.278 "flush": false, 00:09:39.278 "reset": true, 00:09:39.278 "nvme_admin": false, 00:09:39.278 "nvme_io": false, 00:09:39.278 "nvme_io_md": false, 00:09:39.278 "write_zeroes": true, 00:09:39.278 "zcopy": false, 00:09:39.278 "get_zone_info": false, 00:09:39.278 
"zone_management": false, 00:09:39.278 "zone_append": false, 00:09:39.278 "compare": false, 00:09:39.278 "compare_and_write": false, 00:09:39.278 "abort": false, 00:09:39.278 "seek_hole": false, 00:09:39.278 "seek_data": false, 00:09:39.278 "copy": false, 00:09:39.278 "nvme_iov_md": false 00:09:39.278 }, 00:09:39.278 "memory_domains": [ 00:09:39.278 { 00:09:39.278 "dma_device_id": "system", 00:09:39.278 "dma_device_type": 1 00:09:39.278 }, 00:09:39.278 { 00:09:39.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.278 "dma_device_type": 2 00:09:39.278 }, 00:09:39.278 { 00:09:39.278 "dma_device_id": "system", 00:09:39.278 "dma_device_type": 1 00:09:39.278 }, 00:09:39.278 { 00:09:39.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.278 "dma_device_type": 2 00:09:39.278 }, 00:09:39.278 { 00:09:39.278 "dma_device_id": "system", 00:09:39.278 "dma_device_type": 1 00:09:39.278 }, 00:09:39.278 { 00:09:39.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.278 "dma_device_type": 2 00:09:39.278 } 00:09:39.278 ], 00:09:39.278 "driver_specific": { 00:09:39.278 "raid": { 00:09:39.278 "uuid": "8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8", 00:09:39.278 "strip_size_kb": 0, 00:09:39.278 "state": "online", 00:09:39.278 "raid_level": "raid1", 00:09:39.278 "superblock": true, 00:09:39.278 "num_base_bdevs": 3, 00:09:39.278 "num_base_bdevs_discovered": 3, 00:09:39.278 "num_base_bdevs_operational": 3, 00:09:39.278 "base_bdevs_list": [ 00:09:39.278 { 00:09:39.278 "name": "pt1", 00:09:39.278 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.278 "is_configured": true, 00:09:39.278 "data_offset": 2048, 00:09:39.278 "data_size": 63488 00:09:39.278 }, 00:09:39.278 { 00:09:39.278 "name": "pt2", 00:09:39.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.278 "is_configured": true, 00:09:39.278 "data_offset": 2048, 00:09:39.278 "data_size": 63488 00:09:39.278 }, 00:09:39.278 { 00:09:39.278 "name": "pt3", 00:09:39.278 "uuid": "00000000-0000-0000-0000-000000000003", 
00:09:39.278 "is_configured": true, 00:09:39.278 "data_offset": 2048, 00:09:39.278 "data_size": 63488 00:09:39.278 } 00:09:39.278 ] 00:09:39.278 } 00:09:39.278 } 00:09:39.278 }' 00:09:39.278 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.278 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:39.278 pt2 00:09:39.278 pt3' 00:09:39.278 17:02:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.278 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.278 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.279 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.537 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.537 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.537 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:39.537 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:39.537 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.537 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.537 [2024-11-20 17:02:03.174404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:09:39.537 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.537 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8 '!=' 8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8 ']' 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.538 [2024-11-20 17:02:03.222140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.538 "name": "raid_bdev1", 00:09:39.538 "uuid": "8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8", 00:09:39.538 "strip_size_kb": 0, 00:09:39.538 "state": "online", 00:09:39.538 "raid_level": "raid1", 00:09:39.538 "superblock": true, 00:09:39.538 "num_base_bdevs": 3, 00:09:39.538 "num_base_bdevs_discovered": 2, 00:09:39.538 "num_base_bdevs_operational": 2, 00:09:39.538 "base_bdevs_list": [ 00:09:39.538 { 00:09:39.538 "name": null, 00:09:39.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.538 "is_configured": false, 00:09:39.538 "data_offset": 0, 00:09:39.538 "data_size": 63488 00:09:39.538 }, 00:09:39.538 { 00:09:39.538 "name": "pt2", 00:09:39.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.538 "is_configured": true, 00:09:39.538 "data_offset": 2048, 00:09:39.538 "data_size": 63488 00:09:39.538 }, 00:09:39.538 { 00:09:39.538 "name": "pt3", 00:09:39.538 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.538 "is_configured": true, 00:09:39.538 "data_offset": 2048, 00:09:39.538 "data_size": 63488 00:09:39.538 } 00:09:39.538 ] 00:09:39.538 }' 00:09:39.538 17:02:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.538 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.105 [2024-11-20 17:02:03.742289] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.105 [2024-11-20 17:02:03.742322] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.105 [2024-11-20 17:02:03.742443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.105 [2024-11-20 17:02:03.742532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.105 [2024-11-20 17:02:03.742556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:40.105 
17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:40.105 [2024-11-20 17:02:03.830241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:40.105 [2024-11-20 17:02:03.830336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.105 [2024-11-20 17:02:03.830376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:40.105 [2024-11-20 17:02:03.830393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.105 [2024-11-20 17:02:03.833398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.105 [2024-11-20 17:02:03.833470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:40.105 [2024-11-20 17:02:03.833591] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:40.105 [2024-11-20 17:02:03.833651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:40.105 pt2 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.105 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.106 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.106 "name": "raid_bdev1", 00:09:40.106 "uuid": "8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8", 00:09:40.106 "strip_size_kb": 0, 00:09:40.106 "state": "configuring", 00:09:40.106 "raid_level": "raid1", 00:09:40.106 "superblock": true, 00:09:40.106 "num_base_bdevs": 3, 00:09:40.106 "num_base_bdevs_discovered": 1, 00:09:40.106 "num_base_bdevs_operational": 2, 00:09:40.106 "base_bdevs_list": [ 00:09:40.106 { 00:09:40.106 "name": null, 00:09:40.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.106 "is_configured": false, 00:09:40.106 "data_offset": 2048, 00:09:40.106 "data_size": 63488 00:09:40.106 }, 00:09:40.106 { 00:09:40.106 "name": "pt2", 00:09:40.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.106 "is_configured": true, 00:09:40.106 "data_offset": 2048, 00:09:40.106 "data_size": 63488 00:09:40.106 }, 00:09:40.106 { 00:09:40.106 "name": null, 00:09:40.106 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.106 "is_configured": false, 00:09:40.106 "data_offset": 2048, 00:09:40.106 "data_size": 63488 00:09:40.106 } 00:09:40.106 ] 00:09:40.106 }' 
00:09:40.106 17:02:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.106 17:02:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.675 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:40.675 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:40.675 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:40.675 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:40.675 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.675 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.675 [2024-11-20 17:02:04.358422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:40.675 [2024-11-20 17:02:04.358502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.675 [2024-11-20 17:02:04.358533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:40.675 [2024-11-20 17:02:04.358552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.675 [2024-11-20 17:02:04.359119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.675 [2024-11-20 17:02:04.359154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:40.675 [2024-11-20 17:02:04.359312] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:40.675 [2024-11-20 17:02:04.359361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:40.675 [2024-11-20 17:02:04.359535] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:40.675 [2024-11-20 17:02:04.359557] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:40.675 [2024-11-20 17:02:04.359901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:40.675 [2024-11-20 17:02:04.360107] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:40.676 [2024-11-20 17:02:04.360124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:40.676 [2024-11-20 17:02:04.360297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.676 pt3 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.676 "name": "raid_bdev1", 00:09:40.676 "uuid": "8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8", 00:09:40.676 "strip_size_kb": 0, 00:09:40.676 "state": "online", 00:09:40.676 "raid_level": "raid1", 00:09:40.676 "superblock": true, 00:09:40.676 "num_base_bdevs": 3, 00:09:40.676 "num_base_bdevs_discovered": 2, 00:09:40.676 "num_base_bdevs_operational": 2, 00:09:40.676 "base_bdevs_list": [ 00:09:40.676 { 00:09:40.676 "name": null, 00:09:40.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.676 "is_configured": false, 00:09:40.676 "data_offset": 2048, 00:09:40.676 "data_size": 63488 00:09:40.676 }, 00:09:40.676 { 00:09:40.676 "name": "pt2", 00:09:40.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.676 "is_configured": true, 00:09:40.676 "data_offset": 2048, 00:09:40.676 "data_size": 63488 00:09:40.676 }, 00:09:40.676 { 00:09:40.676 "name": "pt3", 00:09:40.676 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.676 "is_configured": true, 00:09:40.676 "data_offset": 2048, 00:09:40.676 "data_size": 63488 00:09:40.676 } 00:09:40.676 ] 00:09:40.676 }' 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.676 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.244 
17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.244 [2024-11-20 17:02:04.866592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:41.244 [2024-11-20 17:02:04.866626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.244 [2024-11-20 17:02:04.866709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.244 [2024-11-20 17:02:04.866848] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.244 [2024-11-20 17:02:04.866872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.244 17:02:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.244 [2024-11-20 17:02:04.938638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:41.244 [2024-11-20 17:02:04.938844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.244 [2024-11-20 17:02:04.938885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:41.244 [2024-11-20 17:02:04.938902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.244 [2024-11-20 17:02:04.941869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.244 [2024-11-20 17:02:04.942049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:41.244 [2024-11-20 17:02:04.942180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:41.244 [2024-11-20 17:02:04.942243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:41.244 [2024-11-20 17:02:04.942420] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:41.244 [2024-11-20 17:02:04.942439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:41.244 [2024-11-20 17:02:04.942462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:41.244 [2024-11-20 
17:02:04.942538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.244 pt1 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.244 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.244 17:02:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.244 "name": "raid_bdev1", 00:09:41.244 "uuid": "8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8", 00:09:41.244 "strip_size_kb": 0, 00:09:41.244 "state": "configuring", 00:09:41.244 "raid_level": "raid1", 00:09:41.244 "superblock": true, 00:09:41.244 "num_base_bdevs": 3, 00:09:41.244 "num_base_bdevs_discovered": 1, 00:09:41.244 "num_base_bdevs_operational": 2, 00:09:41.244 "base_bdevs_list": [ 00:09:41.244 { 00:09:41.244 "name": null, 00:09:41.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.245 "is_configured": false, 00:09:41.245 "data_offset": 2048, 00:09:41.245 "data_size": 63488 00:09:41.245 }, 00:09:41.245 { 00:09:41.245 "name": "pt2", 00:09:41.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.245 "is_configured": true, 00:09:41.245 "data_offset": 2048, 00:09:41.245 "data_size": 63488 00:09:41.245 }, 00:09:41.245 { 00:09:41.245 "name": null, 00:09:41.245 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.245 "is_configured": false, 00:09:41.245 "data_offset": 2048, 00:09:41.245 "data_size": 63488 00:09:41.245 } 00:09:41.245 ] 00:09:41.245 }' 00:09:41.245 17:02:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.245 17:02:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.813 [2024-11-20 17:02:05.515013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:41.813 [2024-11-20 17:02:05.515114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.813 [2024-11-20 17:02:05.515149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:41.813 [2024-11-20 17:02:05.515165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.813 [2024-11-20 17:02:05.515845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.813 [2024-11-20 17:02:05.515876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:41.813 [2024-11-20 17:02:05.515984] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:41.813 [2024-11-20 17:02:05.516017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:41.813 [2024-11-20 17:02:05.516210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:41.813 [2024-11-20 17:02:05.516232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:41.813 [2024-11-20 17:02:05.516669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:41.813 [2024-11-20 17:02:05.516867] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:41.813 [2024-11-20 17:02:05.516916] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:09:41.813 [2024-11-20 17:02:05.517088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.813 pt3 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.813 "name": "raid_bdev1", 00:09:41.813 "uuid": "8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8", 00:09:41.813 "strip_size_kb": 0, 00:09:41.813 "state": "online", 00:09:41.813 "raid_level": "raid1", 00:09:41.813 "superblock": true, 00:09:41.813 "num_base_bdevs": 3, 00:09:41.813 "num_base_bdevs_discovered": 2, 00:09:41.813 "num_base_bdevs_operational": 2, 00:09:41.813 "base_bdevs_list": [ 00:09:41.813 { 00:09:41.813 "name": null, 00:09:41.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.813 "is_configured": false, 00:09:41.813 "data_offset": 2048, 00:09:41.813 "data_size": 63488 00:09:41.813 }, 00:09:41.813 { 00:09:41.813 "name": "pt2", 00:09:41.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.813 "is_configured": true, 00:09:41.813 "data_offset": 2048, 00:09:41.813 "data_size": 63488 00:09:41.813 }, 00:09:41.813 { 00:09:41.813 "name": "pt3", 00:09:41.813 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.813 "is_configured": true, 00:09:41.813 "data_offset": 2048, 00:09:41.813 "data_size": 63488 00:09:41.813 } 00:09:41.813 ] 00:09:41.813 }' 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.813 17:02:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:42.381 
17:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:42.381 [2024-11-20 17:02:06.111542] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8 '!=' 8aeef8a9-dbc0-4cae-b3a4-b883aade5ca8 ']' 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68534 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68534 ']' 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68534 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68534 00:09:42.381 killing process with pid 68534 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68534' 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68534 00:09:42.381 [2024-11-20 
17:02:06.194004] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.381 17:02:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68534 00:09:42.381 [2024-11-20 17:02:06.194122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.381 [2024-11-20 17:02:06.194207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.381 [2024-11-20 17:02:06.194227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:42.640 [2024-11-20 17:02:06.473461] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.015 17:02:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:44.015 00:09:44.015 real 0m8.716s 00:09:44.015 user 0m14.317s 00:09:44.015 sys 0m1.179s 00:09:44.015 17:02:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.015 17:02:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.015 ************************************ 00:09:44.015 END TEST raid_superblock_test 00:09:44.015 ************************************ 00:09:44.015 17:02:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:44.015 17:02:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:44.015 17:02:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.015 17:02:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.015 ************************************ 00:09:44.015 START TEST raid_read_error_test 00:09:44.015 ************************************ 00:09:44.015 17:02:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:44.015 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:09:44.015 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:44.015 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:44.015 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:44.015 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.015 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:44.015 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:44.016 17:02:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0llnyrE09D 00:09:44.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68985 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68985 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68985 ']' 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.016 17:02:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.016 [2024-11-20 17:02:07.694669] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:09:44.016 [2024-11-20 17:02:07.694859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68985 ] 00:09:44.016 [2024-11-20 17:02:07.870472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.274 [2024-11-20 17:02:07.999252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.532 [2024-11-20 17:02:08.205120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.532 [2024-11-20 17:02:08.205193] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.099 BaseBdev1_malloc 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.099 true 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.099 [2024-11-20 17:02:08.801858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:45.099 [2024-11-20 17:02:08.801936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.099 [2024-11-20 17:02:08.801964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:45.099 [2024-11-20 17:02:08.801982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.099 [2024-11-20 17:02:08.805056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.099 [2024-11-20 17:02:08.805105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:45.099 BaseBdev1 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.099 BaseBdev2_malloc 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.099 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.099 true 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.100 [2024-11-20 17:02:08.861934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:45.100 [2024-11-20 17:02:08.862216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.100 [2024-11-20 17:02:08.862251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:45.100 [2024-11-20 17:02:08.862269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.100 [2024-11-20 17:02:08.865278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.100 [2024-11-20 17:02:08.865496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:45.100 BaseBdev2 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.100 BaseBdev3_malloc 00:09:45.100 17:02:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.100 true 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.100 [2024-11-20 17:02:08.931480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:45.100 [2024-11-20 17:02:08.931546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.100 [2024-11-20 17:02:08.931573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:45.100 [2024-11-20 17:02:08.931591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.100 [2024-11-20 17:02:08.934786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.100 [2024-11-20 17:02:08.934866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:45.100 BaseBdev3 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.100 [2024-11-20 17:02:08.943704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.100 [2024-11-20 17:02:08.946488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.100 [2024-11-20 17:02:08.946726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:45.100 [2024-11-20 17:02:08.947126] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:45.100 [2024-11-20 17:02:08.947145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:45.100 [2024-11-20 17:02:08.947482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:45.100 [2024-11-20 17:02:08.947717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:45.100 [2024-11-20 17:02:08.947767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:45.100 [2024-11-20 17:02:08.948022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.100 17:02:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.100 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.359 17:02:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.359 17:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.359 "name": "raid_bdev1", 00:09:45.359 "uuid": "327497d5-42ab-4575-b337-ce9286b23e29", 00:09:45.359 "strip_size_kb": 0, 00:09:45.359 "state": "online", 00:09:45.359 "raid_level": "raid1", 00:09:45.359 "superblock": true, 00:09:45.359 "num_base_bdevs": 3, 00:09:45.359 "num_base_bdevs_discovered": 3, 00:09:45.359 "num_base_bdevs_operational": 3, 00:09:45.359 "base_bdevs_list": [ 00:09:45.359 { 00:09:45.359 "name": "BaseBdev1", 00:09:45.359 "uuid": "42279473-0cbb-5318-ad4a-bb88a6a00082", 00:09:45.359 "is_configured": true, 00:09:45.359 "data_offset": 2048, 00:09:45.359 "data_size": 63488 00:09:45.359 }, 00:09:45.359 { 00:09:45.359 "name": "BaseBdev2", 00:09:45.359 "uuid": "b463cc29-3915-52bc-af6e-2c8b2fa6be60", 00:09:45.359 "is_configured": true, 00:09:45.359 "data_offset": 2048, 00:09:45.359 "data_size": 63488 
00:09:45.359 }, 00:09:45.359 { 00:09:45.359 "name": "BaseBdev3", 00:09:45.359 "uuid": "b6e15c29-2b0c-5fa2-8669-313d3bad71f6", 00:09:45.359 "is_configured": true, 00:09:45.359 "data_offset": 2048, 00:09:45.359 "data_size": 63488 00:09:45.359 } 00:09:45.359 ] 00:09:45.359 }' 00:09:45.359 17:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.359 17:02:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.926 17:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:45.926 17:02:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:45.926 [2024-11-20 17:02:09.653939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.861 
17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.861 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.861 "name": "raid_bdev1", 00:09:46.861 "uuid": "327497d5-42ab-4575-b337-ce9286b23e29", 00:09:46.861 "strip_size_kb": 0, 00:09:46.861 "state": "online", 00:09:46.861 "raid_level": "raid1", 00:09:46.861 "superblock": true, 00:09:46.861 "num_base_bdevs": 3, 00:09:46.861 "num_base_bdevs_discovered": 3, 00:09:46.861 "num_base_bdevs_operational": 3, 00:09:46.862 "base_bdevs_list": [ 00:09:46.862 { 00:09:46.862 "name": "BaseBdev1", 00:09:46.862 "uuid": "42279473-0cbb-5318-ad4a-bb88a6a00082", 
00:09:46.862 "is_configured": true, 00:09:46.862 "data_offset": 2048, 00:09:46.862 "data_size": 63488 00:09:46.862 }, 00:09:46.862 { 00:09:46.862 "name": "BaseBdev2", 00:09:46.862 "uuid": "b463cc29-3915-52bc-af6e-2c8b2fa6be60", 00:09:46.862 "is_configured": true, 00:09:46.862 "data_offset": 2048, 00:09:46.862 "data_size": 63488 00:09:46.862 }, 00:09:46.862 { 00:09:46.862 "name": "BaseBdev3", 00:09:46.862 "uuid": "b6e15c29-2b0c-5fa2-8669-313d3bad71f6", 00:09:46.862 "is_configured": true, 00:09:46.862 "data_offset": 2048, 00:09:46.862 "data_size": 63488 00:09:46.862 } 00:09:46.862 ] 00:09:46.862 }' 00:09:46.862 17:02:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.862 17:02:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.429 17:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:47.430 17:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.430 17:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.430 [2024-11-20 17:02:11.090554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.430 [2024-11-20 17:02:11.090590] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.430 { 00:09:47.430 "results": [ 00:09:47.430 { 00:09:47.430 "job": "raid_bdev1", 00:09:47.430 "core_mask": "0x1", 00:09:47.430 "workload": "randrw", 00:09:47.430 "percentage": 50, 00:09:47.430 "status": "finished", 00:09:47.430 "queue_depth": 1, 00:09:47.430 "io_size": 131072, 00:09:47.430 "runtime": 1.433819, 00:09:47.430 "iops": 8710.304438705303, 00:09:47.430 "mibps": 1088.788054838163, 00:09:47.430 "io_failed": 0, 00:09:47.430 "io_timeout": 0, 00:09:47.430 "avg_latency_us": 110.20622132931526, 00:09:47.430 "min_latency_us": 39.79636363636364, 00:09:47.430 "max_latency_us": 1966.08 00:09:47.430 } 
00:09:47.430 ], 00:09:47.430 "core_count": 1 00:09:47.430 } 00:09:47.430 [2024-11-20 17:02:11.094285] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.430 [2024-11-20 17:02:11.094347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.430 [2024-11-20 17:02:11.094539] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.430 [2024-11-20 17:02:11.094559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:47.430 17:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.430 17:02:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68985 00:09:47.430 17:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68985 ']' 00:09:47.430 17:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68985 00:09:47.430 17:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:47.430 17:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.430 17:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68985 00:09:47.430 17:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.430 killing process with pid 68985 00:09:47.430 17:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.430 17:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68985' 00:09:47.430 17:02:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68985 00:09:47.430 [2024-11-20 17:02:11.137674] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.430 17:02:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68985 00:09:47.689 [2024-11-20 17:02:11.351830] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.740 17:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0llnyrE09D 00:09:48.740 17:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:48.740 17:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:48.740 ************************************ 00:09:48.740 END TEST raid_read_error_test 00:09:48.740 ************************************ 00:09:48.740 17:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:48.740 17:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:48.740 17:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:48.740 17:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:48.740 17:02:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:48.740 00:09:48.740 real 0m4.881s 00:09:48.740 user 0m6.161s 00:09:48.740 sys 0m0.593s 00:09:48.740 17:02:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.740 17:02:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.740 17:02:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:48.740 17:02:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:48.740 17:02:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.740 17:02:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.740 ************************************ 00:09:48.740 START TEST raid_write_error_test 00:09:48.740 ************************************ 00:09:48.740 17:02:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dKcFc2FXjc 00:09:48.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69131 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69131 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69131 ']' 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.740 17:02:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.999 [2024-11-20 17:02:12.655503] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:09:48.999 [2024-11-20 17:02:12.655936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69131 ] 00:09:48.999 [2024-11-20 17:02:12.847879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.257 [2024-11-20 17:02:12.991211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.516 [2024-11-20 17:02:13.203348] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.516 [2024-11-20 17:02:13.203438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.082 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.082 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:50.082 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.082 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:50.082 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.082 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.082 BaseBdev1_malloc 00:09:50.082 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.082 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:50.082 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.082 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.082 true 00:09:50.082 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.083 [2024-11-20 17:02:13.731508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:50.083 [2024-11-20 17:02:13.731574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.083 [2024-11-20 17:02:13.731604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:50.083 [2024-11-20 17:02:13.731621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.083 [2024-11-20 17:02:13.734563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.083 [2024-11-20 17:02:13.734613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:50.083 BaseBdev1 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:50.083 BaseBdev2_malloc 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.083 true 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.083 [2024-11-20 17:02:13.798006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:50.083 [2024-11-20 17:02:13.798072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.083 [2024-11-20 17:02:13.798107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:50.083 [2024-11-20 17:02:13.798124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.083 [2024-11-20 17:02:13.801097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.083 [2024-11-20 17:02:13.801145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:50.083 BaseBdev2 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.083 17:02:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.083 BaseBdev3_malloc 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.083 true 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.083 [2024-11-20 17:02:13.869142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:50.083 [2024-11-20 17:02:13.869205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.083 [2024-11-20 17:02:13.869231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:50.083 [2024-11-20 17:02:13.869248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.083 [2024-11-20 17:02:13.872124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.083 [2024-11-20 17:02:13.872172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:50.083 BaseBdev3 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.083 [2024-11-20 17:02:13.877227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.083 [2024-11-20 17:02:13.879613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.083 [2024-11-20 17:02:13.879722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.083 [2024-11-20 17:02:13.880018] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:50.083 [2024-11-20 17:02:13.880037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:50.083 [2024-11-20 17:02:13.880347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:50.083 [2024-11-20 17:02:13.880593] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:50.083 [2024-11-20 17:02:13.880612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:50.083 [2024-11-20 17:02:13.880819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.083 "name": "raid_bdev1", 00:09:50.083 "uuid": "6f8370ca-aa63-4bf2-bff1-684a60672bc2", 00:09:50.083 "strip_size_kb": 0, 00:09:50.083 "state": "online", 00:09:50.083 "raid_level": "raid1", 00:09:50.083 "superblock": true, 00:09:50.083 "num_base_bdevs": 3, 00:09:50.083 "num_base_bdevs_discovered": 3, 00:09:50.083 "num_base_bdevs_operational": 3, 00:09:50.083 "base_bdevs_list": [ 00:09:50.083 { 00:09:50.083 "name": "BaseBdev1", 00:09:50.083 
"uuid": "f76eba4a-85d6-5a6d-8ace-68ff612e3298", 00:09:50.083 "is_configured": true, 00:09:50.083 "data_offset": 2048, 00:09:50.083 "data_size": 63488 00:09:50.083 }, 00:09:50.083 { 00:09:50.083 "name": "BaseBdev2", 00:09:50.083 "uuid": "2bda60d7-3680-5003-8a13-84573474b7ca", 00:09:50.083 "is_configured": true, 00:09:50.083 "data_offset": 2048, 00:09:50.083 "data_size": 63488 00:09:50.083 }, 00:09:50.083 { 00:09:50.083 "name": "BaseBdev3", 00:09:50.083 "uuid": "00ac83c0-99b7-56e5-948c-93762c7c9084", 00:09:50.083 "is_configured": true, 00:09:50.083 "data_offset": 2048, 00:09:50.083 "data_size": 63488 00:09:50.083 } 00:09:50.083 ] 00:09:50.083 }' 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.083 17:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.649 17:02:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:50.649 17:02:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:50.649 [2024-11-20 17:02:14.510778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.585 [2024-11-20 17:02:15.389746] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:51.585 [2024-11-20 17:02:15.389823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:51.585 [2024-11-20 17:02:15.390089] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.585 "name": "raid_bdev1", 00:09:51.585 "uuid": "6f8370ca-aa63-4bf2-bff1-684a60672bc2", 00:09:51.585 "strip_size_kb": 0, 00:09:51.585 "state": "online", 00:09:51.585 "raid_level": "raid1", 00:09:51.585 "superblock": true, 00:09:51.585 "num_base_bdevs": 3, 00:09:51.585 "num_base_bdevs_discovered": 2, 00:09:51.585 "num_base_bdevs_operational": 2, 00:09:51.585 "base_bdevs_list": [ 00:09:51.585 { 00:09:51.585 "name": null, 00:09:51.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.585 "is_configured": false, 00:09:51.585 "data_offset": 0, 00:09:51.585 "data_size": 63488 00:09:51.585 }, 00:09:51.585 { 00:09:51.585 "name": "BaseBdev2", 00:09:51.585 "uuid": "2bda60d7-3680-5003-8a13-84573474b7ca", 00:09:51.585 "is_configured": true, 00:09:51.585 "data_offset": 2048, 00:09:51.585 "data_size": 63488 00:09:51.585 }, 00:09:51.585 { 00:09:51.585 "name": "BaseBdev3", 00:09:51.585 "uuid": "00ac83c0-99b7-56e5-948c-93762c7c9084", 00:09:51.585 "is_configured": true, 00:09:51.585 "data_offset": 2048, 00:09:51.585 "data_size": 63488 00:09:51.585 } 00:09:51.585 ] 00:09:51.585 }' 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.585 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.152 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:52.152 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.152 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.152 [2024-11-20 17:02:15.927450] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:52.152 [2024-11-20 17:02:15.927491] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.153 [2024-11-20 17:02:15.930825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.153 [2024-11-20 17:02:15.930898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.153 [2024-11-20 17:02:15.931006] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.153 [2024-11-20 17:02:15.931030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:52.153 { 00:09:52.153 "results": [ 00:09:52.153 { 00:09:52.153 "job": "raid_bdev1", 00:09:52.153 "core_mask": "0x1", 00:09:52.153 "workload": "randrw", 00:09:52.153 "percentage": 50, 00:09:52.153 "status": "finished", 00:09:52.153 "queue_depth": 1, 00:09:52.153 "io_size": 131072, 00:09:52.153 "runtime": 1.414443, 00:09:52.153 "iops": 11163.404958701058, 00:09:52.153 "mibps": 1395.4256198376322, 00:09:52.153 "io_failed": 0, 00:09:52.153 "io_timeout": 0, 00:09:52.153 "avg_latency_us": 85.37011917784558, 00:09:52.153 "min_latency_us": 40.02909090909091, 00:09:52.153 "max_latency_us": 1854.370909090909 00:09:52.153 } 00:09:52.153 ], 00:09:52.153 "core_count": 1 00:09:52.153 } 00:09:52.153 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.153 17:02:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69131 00:09:52.153 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69131 ']' 00:09:52.153 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69131 00:09:52.153 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:52.153 17:02:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.153 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69131 00:09:52.153 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.153 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.153 killing process with pid 69131 00:09:52.153 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69131' 00:09:52.153 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69131 00:09:52.153 [2024-11-20 17:02:15.965459] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.153 17:02:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69131 00:09:52.411 [2024-11-20 17:02:16.165740] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.787 17:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dKcFc2FXjc 00:09:53.787 17:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:53.787 17:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:53.787 17:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:53.787 17:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:53.787 ************************************ 00:09:53.787 END TEST raid_write_error_test 00:09:53.787 ************************************ 00:09:53.787 17:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:53.787 17:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:53.787 17:02:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:09:53.787 00:09:53.787 real 0m4.743s 00:09:53.787 user 0m5.882s 00:09:53.787 sys 0m0.623s 00:09:53.787 17:02:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.787 17:02:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.787 17:02:17 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:53.787 17:02:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:53.787 17:02:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:53.787 17:02:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:53.787 17:02:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.787 17:02:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.787 ************************************ 00:09:53.787 START TEST raid_state_function_test 00:09:53.787 ************************************ 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:53.787 
17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:53.787 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:53.788 17:02:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69280 00:09:53.788 Process raid pid: 69280 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69280' 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69280 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69280 ']' 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.788 17:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.788 [2024-11-20 17:02:17.435206] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:09:53.788 [2024-11-20 17:02:17.435382] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.788 [2024-11-20 17:02:17.622966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.047 [2024-11-20 17:02:17.756827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.306 [2024-11-20 17:02:17.973584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.306 [2024-11-20 17:02:17.973625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.565 [2024-11-20 17:02:18.389311] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.565 [2024-11-20 17:02:18.389386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.565 [2024-11-20 17:02:18.389403] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.565 [2024-11-20 17:02:18.389420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.565 [2024-11-20 17:02:18.389430] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:54.565 [2024-11-20 17:02:18.389444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.565 [2024-11-20 17:02:18.389453] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:54.565 [2024-11-20 17:02:18.389467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.565 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.824 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.824 "name": "Existed_Raid", 00:09:54.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.824 "strip_size_kb": 64, 00:09:54.824 "state": "configuring", 00:09:54.824 "raid_level": "raid0", 00:09:54.824 "superblock": false, 00:09:54.824 "num_base_bdevs": 4, 00:09:54.824 "num_base_bdevs_discovered": 0, 00:09:54.824 "num_base_bdevs_operational": 4, 00:09:54.824 "base_bdevs_list": [ 00:09:54.824 { 00:09:54.824 "name": "BaseBdev1", 00:09:54.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.824 "is_configured": false, 00:09:54.824 "data_offset": 0, 00:09:54.824 "data_size": 0 00:09:54.824 }, 00:09:54.824 { 00:09:54.824 "name": "BaseBdev2", 00:09:54.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.824 "is_configured": false, 00:09:54.824 "data_offset": 0, 00:09:54.824 "data_size": 0 00:09:54.824 }, 00:09:54.824 { 00:09:54.824 "name": "BaseBdev3", 00:09:54.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.824 "is_configured": false, 00:09:54.824 "data_offset": 0, 00:09:54.824 "data_size": 0 00:09:54.824 }, 00:09:54.824 { 00:09:54.824 "name": "BaseBdev4", 00:09:54.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.824 "is_configured": false, 00:09:54.824 "data_offset": 0, 00:09:54.824 "data_size": 0 00:09:54.824 } 00:09:54.824 ] 00:09:54.824 }' 00:09:54.824 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.824 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.083 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:09:55.083 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.083 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.083 [2024-11-20 17:02:18.921490] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.083 [2024-11-20 17:02:18.921553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:55.083 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.083 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:55.083 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.083 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.083 [2024-11-20 17:02:18.929442] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:55.083 [2024-11-20 17:02:18.929507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:55.083 [2024-11-20 17:02:18.929522] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.083 [2024-11-20 17:02:18.929537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.083 [2024-11-20 17:02:18.929546] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:55.083 [2024-11-20 17:02:18.929560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.083 [2024-11-20 17:02:18.929569] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:55.083 [2024-11-20 17:02:18.929582] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:55.083 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.083 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:55.083 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.083 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.342 [2024-11-20 17:02:18.976111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.342 BaseBdev1 00:09:55.342 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.342 17:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:55.342 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:55.342 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:55.342 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:55.342 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:55.342 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:55.342 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:55.342 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.342 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.342 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.342 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:55.342 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.342 17:02:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.342 [ 00:09:55.342 { 00:09:55.342 "name": "BaseBdev1", 00:09:55.342 "aliases": [ 00:09:55.342 "9c22ff5f-1a86-451c-8560-b07845cd6d58" 00:09:55.342 ], 00:09:55.342 "product_name": "Malloc disk", 00:09:55.342 "block_size": 512, 00:09:55.342 "num_blocks": 65536, 00:09:55.342 "uuid": "9c22ff5f-1a86-451c-8560-b07845cd6d58", 00:09:55.342 "assigned_rate_limits": { 00:09:55.342 "rw_ios_per_sec": 0, 00:09:55.342 "rw_mbytes_per_sec": 0, 00:09:55.342 "r_mbytes_per_sec": 0, 00:09:55.342 "w_mbytes_per_sec": 0 00:09:55.342 }, 00:09:55.342 "claimed": true, 00:09:55.342 "claim_type": "exclusive_write", 00:09:55.343 "zoned": false, 00:09:55.343 "supported_io_types": { 00:09:55.343 "read": true, 00:09:55.343 "write": true, 00:09:55.343 "unmap": true, 00:09:55.343 "flush": true, 00:09:55.343 "reset": true, 00:09:55.343 "nvme_admin": false, 00:09:55.343 "nvme_io": false, 00:09:55.343 "nvme_io_md": false, 00:09:55.343 "write_zeroes": true, 00:09:55.343 "zcopy": true, 00:09:55.343 "get_zone_info": false, 00:09:55.343 "zone_management": false, 00:09:55.343 "zone_append": false, 00:09:55.343 "compare": false, 00:09:55.343 "compare_and_write": false, 00:09:55.343 "abort": true, 00:09:55.343 "seek_hole": false, 00:09:55.343 "seek_data": false, 00:09:55.343 "copy": true, 00:09:55.343 "nvme_iov_md": false 00:09:55.343 }, 00:09:55.343 "memory_domains": [ 00:09:55.343 { 00:09:55.343 "dma_device_id": "system", 00:09:55.343 "dma_device_type": 1 00:09:55.343 }, 00:09:55.343 { 00:09:55.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.343 "dma_device_type": 2 00:09:55.343 } 00:09:55.343 ], 00:09:55.343 "driver_specific": {} 00:09:55.343 } 00:09:55.343 ] 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.343 "name": "Existed_Raid", 
00:09:55.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.343 "strip_size_kb": 64, 00:09:55.343 "state": "configuring", 00:09:55.343 "raid_level": "raid0", 00:09:55.343 "superblock": false, 00:09:55.343 "num_base_bdevs": 4, 00:09:55.343 "num_base_bdevs_discovered": 1, 00:09:55.343 "num_base_bdevs_operational": 4, 00:09:55.343 "base_bdevs_list": [ 00:09:55.343 { 00:09:55.343 "name": "BaseBdev1", 00:09:55.343 "uuid": "9c22ff5f-1a86-451c-8560-b07845cd6d58", 00:09:55.343 "is_configured": true, 00:09:55.343 "data_offset": 0, 00:09:55.343 "data_size": 65536 00:09:55.343 }, 00:09:55.343 { 00:09:55.343 "name": "BaseBdev2", 00:09:55.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.343 "is_configured": false, 00:09:55.343 "data_offset": 0, 00:09:55.343 "data_size": 0 00:09:55.343 }, 00:09:55.343 { 00:09:55.343 "name": "BaseBdev3", 00:09:55.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.343 "is_configured": false, 00:09:55.343 "data_offset": 0, 00:09:55.343 "data_size": 0 00:09:55.343 }, 00:09:55.343 { 00:09:55.343 "name": "BaseBdev4", 00:09:55.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.343 "is_configured": false, 00:09:55.343 "data_offset": 0, 00:09:55.343 "data_size": 0 00:09:55.343 } 00:09:55.343 ] 00:09:55.343 }' 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.343 17:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.937 [2024-11-20 17:02:19.520333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.937 [2024-11-20 17:02:19.520397] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.937 [2024-11-20 17:02:19.528380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.937 [2024-11-20 17:02:19.530910] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.937 [2024-11-20 17:02:19.530965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.937 [2024-11-20 17:02:19.530981] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:55.937 [2024-11-20 17:02:19.531006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.937 [2024-11-20 17:02:19.531016] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:55.937 [2024-11-20 17:02:19.531029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.937 "name": "Existed_Raid", 00:09:55.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.937 "strip_size_kb": 64, 00:09:55.937 "state": "configuring", 00:09:55.937 "raid_level": "raid0", 00:09:55.937 "superblock": false, 00:09:55.937 "num_base_bdevs": 4, 00:09:55.937 
"num_base_bdevs_discovered": 1, 00:09:55.937 "num_base_bdevs_operational": 4, 00:09:55.937 "base_bdevs_list": [ 00:09:55.937 { 00:09:55.937 "name": "BaseBdev1", 00:09:55.937 "uuid": "9c22ff5f-1a86-451c-8560-b07845cd6d58", 00:09:55.937 "is_configured": true, 00:09:55.937 "data_offset": 0, 00:09:55.937 "data_size": 65536 00:09:55.937 }, 00:09:55.937 { 00:09:55.937 "name": "BaseBdev2", 00:09:55.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.937 "is_configured": false, 00:09:55.937 "data_offset": 0, 00:09:55.937 "data_size": 0 00:09:55.937 }, 00:09:55.937 { 00:09:55.937 "name": "BaseBdev3", 00:09:55.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.937 "is_configured": false, 00:09:55.937 "data_offset": 0, 00:09:55.937 "data_size": 0 00:09:55.937 }, 00:09:55.937 { 00:09:55.937 "name": "BaseBdev4", 00:09:55.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.937 "is_configured": false, 00:09:55.937 "data_offset": 0, 00:09:55.937 "data_size": 0 00:09:55.937 } 00:09:55.937 ] 00:09:55.937 }' 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.937 17:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.222 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:56.222 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.222 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.480 [2024-11-20 17:02:20.099319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.480 BaseBdev2 00:09:56.480 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.480 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:56.480 17:02:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:56.480 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:56.480 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:56.480 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:56.480 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:56.480 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:56.480 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.480 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.480 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.480 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:56.480 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.480 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.480 [ 00:09:56.480 { 00:09:56.480 "name": "BaseBdev2", 00:09:56.480 "aliases": [ 00:09:56.480 "f475b76b-6bba-4c75-a7d7-b840d0c4f2ce" 00:09:56.480 ], 00:09:56.480 "product_name": "Malloc disk", 00:09:56.480 "block_size": 512, 00:09:56.480 "num_blocks": 65536, 00:09:56.480 "uuid": "f475b76b-6bba-4c75-a7d7-b840d0c4f2ce", 00:09:56.480 "assigned_rate_limits": { 00:09:56.480 "rw_ios_per_sec": 0, 00:09:56.480 "rw_mbytes_per_sec": 0, 00:09:56.480 "r_mbytes_per_sec": 0, 00:09:56.480 "w_mbytes_per_sec": 0 00:09:56.480 }, 00:09:56.480 "claimed": true, 00:09:56.480 "claim_type": "exclusive_write", 00:09:56.480 "zoned": false, 00:09:56.480 "supported_io_types": { 
00:09:56.480 "read": true, 00:09:56.480 "write": true, 00:09:56.480 "unmap": true, 00:09:56.480 "flush": true, 00:09:56.480 "reset": true, 00:09:56.480 "nvme_admin": false, 00:09:56.480 "nvme_io": false, 00:09:56.480 "nvme_io_md": false, 00:09:56.480 "write_zeroes": true, 00:09:56.480 "zcopy": true, 00:09:56.480 "get_zone_info": false, 00:09:56.480 "zone_management": false, 00:09:56.480 "zone_append": false, 00:09:56.480 "compare": false, 00:09:56.481 "compare_and_write": false, 00:09:56.481 "abort": true, 00:09:56.481 "seek_hole": false, 00:09:56.481 "seek_data": false, 00:09:56.481 "copy": true, 00:09:56.481 "nvme_iov_md": false 00:09:56.481 }, 00:09:56.481 "memory_domains": [ 00:09:56.481 { 00:09:56.481 "dma_device_id": "system", 00:09:56.481 "dma_device_type": 1 00:09:56.481 }, 00:09:56.481 { 00:09:56.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.481 "dma_device_type": 2 00:09:56.481 } 00:09:56.481 ], 00:09:56.481 "driver_specific": {} 00:09:56.481 } 00:09:56.481 ] 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.481 "name": "Existed_Raid", 00:09:56.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.481 "strip_size_kb": 64, 00:09:56.481 "state": "configuring", 00:09:56.481 "raid_level": "raid0", 00:09:56.481 "superblock": false, 00:09:56.481 "num_base_bdevs": 4, 00:09:56.481 "num_base_bdevs_discovered": 2, 00:09:56.481 "num_base_bdevs_operational": 4, 00:09:56.481 "base_bdevs_list": [ 00:09:56.481 { 00:09:56.481 "name": "BaseBdev1", 00:09:56.481 "uuid": "9c22ff5f-1a86-451c-8560-b07845cd6d58", 00:09:56.481 "is_configured": true, 00:09:56.481 "data_offset": 0, 00:09:56.481 "data_size": 65536 00:09:56.481 }, 00:09:56.481 { 00:09:56.481 "name": "BaseBdev2", 00:09:56.481 "uuid": "f475b76b-6bba-4c75-a7d7-b840d0c4f2ce", 00:09:56.481 
"is_configured": true, 00:09:56.481 "data_offset": 0, 00:09:56.481 "data_size": 65536 00:09:56.481 }, 00:09:56.481 { 00:09:56.481 "name": "BaseBdev3", 00:09:56.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.481 "is_configured": false, 00:09:56.481 "data_offset": 0, 00:09:56.481 "data_size": 0 00:09:56.481 }, 00:09:56.481 { 00:09:56.481 "name": "BaseBdev4", 00:09:56.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.481 "is_configured": false, 00:09:56.481 "data_offset": 0, 00:09:56.481 "data_size": 0 00:09:56.481 } 00:09:56.481 ] 00:09:56.481 }' 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.481 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.047 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:57.047 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.047 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.047 [2024-11-20 17:02:20.679151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.047 BaseBdev3 00:09:57.047 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.047 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:57.047 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:57.047 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.048 [ 00:09:57.048 { 00:09:57.048 "name": "BaseBdev3", 00:09:57.048 "aliases": [ 00:09:57.048 "1bf8ef43-979c-4a60-aae2-558c57adede8" 00:09:57.048 ], 00:09:57.048 "product_name": "Malloc disk", 00:09:57.048 "block_size": 512, 00:09:57.048 "num_blocks": 65536, 00:09:57.048 "uuid": "1bf8ef43-979c-4a60-aae2-558c57adede8", 00:09:57.048 "assigned_rate_limits": { 00:09:57.048 "rw_ios_per_sec": 0, 00:09:57.048 "rw_mbytes_per_sec": 0, 00:09:57.048 "r_mbytes_per_sec": 0, 00:09:57.048 "w_mbytes_per_sec": 0 00:09:57.048 }, 00:09:57.048 "claimed": true, 00:09:57.048 "claim_type": "exclusive_write", 00:09:57.048 "zoned": false, 00:09:57.048 "supported_io_types": { 00:09:57.048 "read": true, 00:09:57.048 "write": true, 00:09:57.048 "unmap": true, 00:09:57.048 "flush": true, 00:09:57.048 "reset": true, 00:09:57.048 "nvme_admin": false, 00:09:57.048 "nvme_io": false, 00:09:57.048 "nvme_io_md": false, 00:09:57.048 "write_zeroes": true, 00:09:57.048 "zcopy": true, 00:09:57.048 "get_zone_info": false, 00:09:57.048 "zone_management": false, 00:09:57.048 "zone_append": false, 00:09:57.048 "compare": false, 00:09:57.048 "compare_and_write": false, 
00:09:57.048 "abort": true, 00:09:57.048 "seek_hole": false, 00:09:57.048 "seek_data": false, 00:09:57.048 "copy": true, 00:09:57.048 "nvme_iov_md": false 00:09:57.048 }, 00:09:57.048 "memory_domains": [ 00:09:57.048 { 00:09:57.048 "dma_device_id": "system", 00:09:57.048 "dma_device_type": 1 00:09:57.048 }, 00:09:57.048 { 00:09:57.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.048 "dma_device_type": 2 00:09:57.048 } 00:09:57.048 ], 00:09:57.048 "driver_specific": {} 00:09:57.048 } 00:09:57.048 ] 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.048 "name": "Existed_Raid", 00:09:57.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.048 "strip_size_kb": 64, 00:09:57.048 "state": "configuring", 00:09:57.048 "raid_level": "raid0", 00:09:57.048 "superblock": false, 00:09:57.048 "num_base_bdevs": 4, 00:09:57.048 "num_base_bdevs_discovered": 3, 00:09:57.048 "num_base_bdevs_operational": 4, 00:09:57.048 "base_bdevs_list": [ 00:09:57.048 { 00:09:57.048 "name": "BaseBdev1", 00:09:57.048 "uuid": "9c22ff5f-1a86-451c-8560-b07845cd6d58", 00:09:57.048 "is_configured": true, 00:09:57.048 "data_offset": 0, 00:09:57.048 "data_size": 65536 00:09:57.048 }, 00:09:57.048 { 00:09:57.048 "name": "BaseBdev2", 00:09:57.048 "uuid": "f475b76b-6bba-4c75-a7d7-b840d0c4f2ce", 00:09:57.048 "is_configured": true, 00:09:57.048 "data_offset": 0, 00:09:57.048 "data_size": 65536 00:09:57.048 }, 00:09:57.048 { 00:09:57.048 "name": "BaseBdev3", 00:09:57.048 "uuid": "1bf8ef43-979c-4a60-aae2-558c57adede8", 00:09:57.048 "is_configured": true, 00:09:57.048 "data_offset": 0, 00:09:57.048 "data_size": 65536 00:09:57.048 }, 00:09:57.048 { 00:09:57.048 "name": "BaseBdev4", 00:09:57.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.048 "is_configured": false, 
00:09:57.048 "data_offset": 0, 00:09:57.048 "data_size": 0 00:09:57.048 } 00:09:57.048 ] 00:09:57.048 }' 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.048 17:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.614 [2024-11-20 17:02:21.289942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:57.614 [2024-11-20 17:02:21.290000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:57.614 [2024-11-20 17:02:21.290015] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:57.614 [2024-11-20 17:02:21.290369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:57.614 [2024-11-20 17:02:21.290600] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:57.614 [2024-11-20 17:02:21.290632] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:57.614 [2024-11-20 17:02:21.290941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.614 BaseBdev4 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.614 [ 00:09:57.614 { 00:09:57.614 "name": "BaseBdev4", 00:09:57.614 "aliases": [ 00:09:57.614 "88721ebb-6814-4e3a-ac53-065c2b68b285" 00:09:57.614 ], 00:09:57.614 "product_name": "Malloc disk", 00:09:57.614 "block_size": 512, 00:09:57.614 "num_blocks": 65536, 00:09:57.614 "uuid": "88721ebb-6814-4e3a-ac53-065c2b68b285", 00:09:57.614 "assigned_rate_limits": { 00:09:57.614 "rw_ios_per_sec": 0, 00:09:57.614 "rw_mbytes_per_sec": 0, 00:09:57.614 "r_mbytes_per_sec": 0, 00:09:57.614 "w_mbytes_per_sec": 0 00:09:57.614 }, 00:09:57.614 "claimed": true, 00:09:57.614 "claim_type": "exclusive_write", 00:09:57.614 "zoned": false, 00:09:57.614 "supported_io_types": { 00:09:57.614 "read": true, 00:09:57.614 "write": true, 00:09:57.614 "unmap": true, 00:09:57.614 "flush": true, 00:09:57.614 "reset": true, 00:09:57.614 
"nvme_admin": false, 00:09:57.614 "nvme_io": false, 00:09:57.614 "nvme_io_md": false, 00:09:57.614 "write_zeroes": true, 00:09:57.614 "zcopy": true, 00:09:57.614 "get_zone_info": false, 00:09:57.614 "zone_management": false, 00:09:57.614 "zone_append": false, 00:09:57.614 "compare": false, 00:09:57.614 "compare_and_write": false, 00:09:57.614 "abort": true, 00:09:57.614 "seek_hole": false, 00:09:57.614 "seek_data": false, 00:09:57.614 "copy": true, 00:09:57.614 "nvme_iov_md": false 00:09:57.614 }, 00:09:57.614 "memory_domains": [ 00:09:57.614 { 00:09:57.614 "dma_device_id": "system", 00:09:57.614 "dma_device_type": 1 00:09:57.614 }, 00:09:57.614 { 00:09:57.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.614 "dma_device_type": 2 00:09:57.614 } 00:09:57.614 ], 00:09:57.614 "driver_specific": {} 00:09:57.614 } 00:09:57.614 ] 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.614 17:02:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.614 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.615 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.615 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.615 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.615 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.615 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.615 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.615 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.615 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.615 "name": "Existed_Raid", 00:09:57.615 "uuid": "43da6302-1923-4a5a-a705-ac28c23d082e", 00:09:57.615 "strip_size_kb": 64, 00:09:57.615 "state": "online", 00:09:57.615 "raid_level": "raid0", 00:09:57.615 "superblock": false, 00:09:57.615 "num_base_bdevs": 4, 00:09:57.615 "num_base_bdevs_discovered": 4, 00:09:57.615 "num_base_bdevs_operational": 4, 00:09:57.615 "base_bdevs_list": [ 00:09:57.615 { 00:09:57.615 "name": "BaseBdev1", 00:09:57.615 "uuid": "9c22ff5f-1a86-451c-8560-b07845cd6d58", 00:09:57.615 "is_configured": true, 00:09:57.615 "data_offset": 0, 00:09:57.615 "data_size": 65536 00:09:57.615 }, 00:09:57.615 { 00:09:57.615 "name": "BaseBdev2", 00:09:57.615 "uuid": "f475b76b-6bba-4c75-a7d7-b840d0c4f2ce", 00:09:57.615 "is_configured": true, 00:09:57.615 "data_offset": 0, 00:09:57.615 "data_size": 65536 00:09:57.615 }, 00:09:57.615 { 00:09:57.615 "name": "BaseBdev3", 00:09:57.615 "uuid": 
"1bf8ef43-979c-4a60-aae2-558c57adede8", 00:09:57.615 "is_configured": true, 00:09:57.615 "data_offset": 0, 00:09:57.615 "data_size": 65536 00:09:57.615 }, 00:09:57.615 { 00:09:57.615 "name": "BaseBdev4", 00:09:57.615 "uuid": "88721ebb-6814-4e3a-ac53-065c2b68b285", 00:09:57.615 "is_configured": true, 00:09:57.615 "data_offset": 0, 00:09:57.615 "data_size": 65536 00:09:57.615 } 00:09:57.615 ] 00:09:57.615 }' 00:09:57.615 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.615 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:58.182 [2024-11-20 17:02:21.830594] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.182 17:02:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:58.182 "name": "Existed_Raid", 00:09:58.182 "aliases": [ 00:09:58.182 "43da6302-1923-4a5a-a705-ac28c23d082e" 00:09:58.182 ], 00:09:58.182 "product_name": "Raid Volume", 00:09:58.182 "block_size": 512, 00:09:58.182 "num_blocks": 262144, 00:09:58.182 "uuid": "43da6302-1923-4a5a-a705-ac28c23d082e", 00:09:58.182 "assigned_rate_limits": { 00:09:58.182 "rw_ios_per_sec": 0, 00:09:58.182 "rw_mbytes_per_sec": 0, 00:09:58.182 "r_mbytes_per_sec": 0, 00:09:58.182 "w_mbytes_per_sec": 0 00:09:58.182 }, 00:09:58.182 "claimed": false, 00:09:58.182 "zoned": false, 00:09:58.182 "supported_io_types": { 00:09:58.182 "read": true, 00:09:58.182 "write": true, 00:09:58.182 "unmap": true, 00:09:58.182 "flush": true, 00:09:58.182 "reset": true, 00:09:58.182 "nvme_admin": false, 00:09:58.182 "nvme_io": false, 00:09:58.182 "nvme_io_md": false, 00:09:58.182 "write_zeroes": true, 00:09:58.182 "zcopy": false, 00:09:58.182 "get_zone_info": false, 00:09:58.182 "zone_management": false, 00:09:58.182 "zone_append": false, 00:09:58.182 "compare": false, 00:09:58.182 "compare_and_write": false, 00:09:58.182 "abort": false, 00:09:58.182 "seek_hole": false, 00:09:58.182 "seek_data": false, 00:09:58.182 "copy": false, 00:09:58.182 "nvme_iov_md": false 00:09:58.182 }, 00:09:58.182 "memory_domains": [ 00:09:58.182 { 00:09:58.182 "dma_device_id": "system", 00:09:58.182 "dma_device_type": 1 00:09:58.182 }, 00:09:58.182 { 00:09:58.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.182 "dma_device_type": 2 00:09:58.182 }, 00:09:58.182 { 00:09:58.182 "dma_device_id": "system", 00:09:58.182 "dma_device_type": 1 00:09:58.182 }, 00:09:58.182 { 00:09:58.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.182 "dma_device_type": 2 00:09:58.182 }, 00:09:58.182 { 00:09:58.182 "dma_device_id": "system", 00:09:58.182 "dma_device_type": 1 00:09:58.182 }, 00:09:58.182 { 00:09:58.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:58.182 "dma_device_type": 2 00:09:58.182 }, 00:09:58.182 { 00:09:58.182 "dma_device_id": "system", 00:09:58.182 "dma_device_type": 1 00:09:58.182 }, 00:09:58.182 { 00:09:58.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.182 "dma_device_type": 2 00:09:58.182 } 00:09:58.182 ], 00:09:58.182 "driver_specific": { 00:09:58.182 "raid": { 00:09:58.182 "uuid": "43da6302-1923-4a5a-a705-ac28c23d082e", 00:09:58.182 "strip_size_kb": 64, 00:09:58.182 "state": "online", 00:09:58.182 "raid_level": "raid0", 00:09:58.182 "superblock": false, 00:09:58.182 "num_base_bdevs": 4, 00:09:58.182 "num_base_bdevs_discovered": 4, 00:09:58.182 "num_base_bdevs_operational": 4, 00:09:58.182 "base_bdevs_list": [ 00:09:58.182 { 00:09:58.182 "name": "BaseBdev1", 00:09:58.182 "uuid": "9c22ff5f-1a86-451c-8560-b07845cd6d58", 00:09:58.182 "is_configured": true, 00:09:58.182 "data_offset": 0, 00:09:58.182 "data_size": 65536 00:09:58.182 }, 00:09:58.182 { 00:09:58.182 "name": "BaseBdev2", 00:09:58.182 "uuid": "f475b76b-6bba-4c75-a7d7-b840d0c4f2ce", 00:09:58.182 "is_configured": true, 00:09:58.182 "data_offset": 0, 00:09:58.182 "data_size": 65536 00:09:58.182 }, 00:09:58.182 { 00:09:58.182 "name": "BaseBdev3", 00:09:58.182 "uuid": "1bf8ef43-979c-4a60-aae2-558c57adede8", 00:09:58.182 "is_configured": true, 00:09:58.182 "data_offset": 0, 00:09:58.182 "data_size": 65536 00:09:58.182 }, 00:09:58.182 { 00:09:58.182 "name": "BaseBdev4", 00:09:58.182 "uuid": "88721ebb-6814-4e3a-ac53-065c2b68b285", 00:09:58.182 "is_configured": true, 00:09:58.182 "data_offset": 0, 00:09:58.182 "data_size": 65536 00:09:58.182 } 00:09:58.182 ] 00:09:58.182 } 00:09:58.182 } 00:09:58.182 }' 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:58.182 BaseBdev2 00:09:58.182 BaseBdev3 
00:09:58.182 BaseBdev4' 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.182 17:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.182 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.183 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.183 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.183 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:58.183 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.183 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.183 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.183 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.441 17:02:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.441 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.441 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.441 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.442 17:02:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.442 [2024-11-20 17:02:22.190287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:58.442 [2024-11-20 17:02:22.190325] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.442 [2024-11-20 17:02:22.190388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.442 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.700 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.700 "name": "Existed_Raid", 00:09:58.700 "uuid": "43da6302-1923-4a5a-a705-ac28c23d082e", 00:09:58.700 "strip_size_kb": 64, 00:09:58.700 "state": "offline", 00:09:58.700 "raid_level": "raid0", 00:09:58.700 "superblock": false, 00:09:58.700 "num_base_bdevs": 4, 00:09:58.700 "num_base_bdevs_discovered": 3, 00:09:58.700 "num_base_bdevs_operational": 3, 00:09:58.700 "base_bdevs_list": [ 00:09:58.700 { 00:09:58.700 "name": null, 00:09:58.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.700 "is_configured": false, 00:09:58.700 "data_offset": 0, 00:09:58.700 "data_size": 65536 00:09:58.700 }, 00:09:58.700 { 00:09:58.700 "name": "BaseBdev2", 00:09:58.700 "uuid": "f475b76b-6bba-4c75-a7d7-b840d0c4f2ce", 00:09:58.700 "is_configured": 
true, 00:09:58.700 "data_offset": 0, 00:09:58.700 "data_size": 65536 00:09:58.700 }, 00:09:58.700 { 00:09:58.700 "name": "BaseBdev3", 00:09:58.700 "uuid": "1bf8ef43-979c-4a60-aae2-558c57adede8", 00:09:58.700 "is_configured": true, 00:09:58.700 "data_offset": 0, 00:09:58.700 "data_size": 65536 00:09:58.700 }, 00:09:58.700 { 00:09:58.700 "name": "BaseBdev4", 00:09:58.700 "uuid": "88721ebb-6814-4e3a-ac53-065c2b68b285", 00:09:58.700 "is_configured": true, 00:09:58.700 "data_offset": 0, 00:09:58.700 "data_size": 65536 00:09:58.700 } 00:09:58.700 ] 00:09:58.700 }' 00:09:58.700 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.700 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.958 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:58.958 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.958 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.958 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.958 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.958 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:58.958 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.216 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:59.216 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:59.216 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:59.216 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:59.216 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.216 [2024-11-20 17:02:22.846013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:59.217 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.217 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:59.217 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.217 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:59.217 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.217 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.217 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.217 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.217 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:59.217 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:59.217 17:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:59.217 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.217 17:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.217 [2024-11-20 17:02:22.994622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:59.475 17:02:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.475 [2024-11-20 17:02:23.141796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:59.475 [2024-11-20 17:02:23.141852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.475 BaseBdev2 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.475 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.733 [ 00:09:59.733 { 00:09:59.733 "name": "BaseBdev2", 00:09:59.733 "aliases": [ 00:09:59.733 "d65e990d-1350-4915-8774-330f7c04344f" 00:09:59.734 ], 00:09:59.734 "product_name": "Malloc disk", 00:09:59.734 "block_size": 512, 00:09:59.734 "num_blocks": 65536, 00:09:59.734 "uuid": "d65e990d-1350-4915-8774-330f7c04344f", 00:09:59.734 "assigned_rate_limits": { 00:09:59.734 "rw_ios_per_sec": 0, 00:09:59.734 "rw_mbytes_per_sec": 0, 00:09:59.734 "r_mbytes_per_sec": 0, 00:09:59.734 "w_mbytes_per_sec": 0 00:09:59.734 }, 00:09:59.734 "claimed": false, 00:09:59.734 "zoned": false, 00:09:59.734 "supported_io_types": { 00:09:59.734 "read": true, 00:09:59.734 "write": true, 00:09:59.734 "unmap": true, 00:09:59.734 "flush": true, 00:09:59.734 "reset": true, 00:09:59.734 "nvme_admin": false, 00:09:59.734 "nvme_io": false, 00:09:59.734 "nvme_io_md": false, 00:09:59.734 "write_zeroes": true, 00:09:59.734 "zcopy": true, 00:09:59.734 "get_zone_info": false, 00:09:59.734 "zone_management": false, 00:09:59.734 "zone_append": false, 00:09:59.734 "compare": false, 00:09:59.734 "compare_and_write": false, 00:09:59.734 "abort": true, 00:09:59.734 "seek_hole": false, 00:09:59.734 
"seek_data": false, 00:09:59.734 "copy": true, 00:09:59.734 "nvme_iov_md": false 00:09:59.734 }, 00:09:59.734 "memory_domains": [ 00:09:59.734 { 00:09:59.734 "dma_device_id": "system", 00:09:59.734 "dma_device_type": 1 00:09:59.734 }, 00:09:59.734 { 00:09:59.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.734 "dma_device_type": 2 00:09:59.734 } 00:09:59.734 ], 00:09:59.734 "driver_specific": {} 00:09:59.734 } 00:09:59.734 ] 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.734 BaseBdev3 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.734 [ 00:09:59.734 { 00:09:59.734 "name": "BaseBdev3", 00:09:59.734 "aliases": [ 00:09:59.734 "506ee4c8-71f0-4284-90c0-d307075fe3a1" 00:09:59.734 ], 00:09:59.734 "product_name": "Malloc disk", 00:09:59.734 "block_size": 512, 00:09:59.734 "num_blocks": 65536, 00:09:59.734 "uuid": "506ee4c8-71f0-4284-90c0-d307075fe3a1", 00:09:59.734 "assigned_rate_limits": { 00:09:59.734 "rw_ios_per_sec": 0, 00:09:59.734 "rw_mbytes_per_sec": 0, 00:09:59.734 "r_mbytes_per_sec": 0, 00:09:59.734 "w_mbytes_per_sec": 0 00:09:59.734 }, 00:09:59.734 "claimed": false, 00:09:59.734 "zoned": false, 00:09:59.734 "supported_io_types": { 00:09:59.734 "read": true, 00:09:59.734 "write": true, 00:09:59.734 "unmap": true, 00:09:59.734 "flush": true, 00:09:59.734 "reset": true, 00:09:59.734 "nvme_admin": false, 00:09:59.734 "nvme_io": false, 00:09:59.734 "nvme_io_md": false, 00:09:59.734 "write_zeroes": true, 00:09:59.734 "zcopy": true, 00:09:59.734 "get_zone_info": false, 00:09:59.734 "zone_management": false, 00:09:59.734 "zone_append": false, 00:09:59.734 "compare": false, 00:09:59.734 "compare_and_write": false, 00:09:59.734 "abort": true, 00:09:59.734 "seek_hole": false, 00:09:59.734 "seek_data": false, 
00:09:59.734 "copy": true, 00:09:59.734 "nvme_iov_md": false 00:09:59.734 }, 00:09:59.734 "memory_domains": [ 00:09:59.734 { 00:09:59.734 "dma_device_id": "system", 00:09:59.734 "dma_device_type": 1 00:09:59.734 }, 00:09:59.734 { 00:09:59.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.734 "dma_device_type": 2 00:09:59.734 } 00:09:59.734 ], 00:09:59.734 "driver_specific": {} 00:09:59.734 } 00:09:59.734 ] 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.734 BaseBdev4 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.734 
17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.734 [ 00:09:59.734 { 00:09:59.734 "name": "BaseBdev4", 00:09:59.734 "aliases": [ 00:09:59.734 "c6f2aab2-42fe-4092-abdb-afe0a291b570" 00:09:59.734 ], 00:09:59.734 "product_name": "Malloc disk", 00:09:59.734 "block_size": 512, 00:09:59.734 "num_blocks": 65536, 00:09:59.734 "uuid": "c6f2aab2-42fe-4092-abdb-afe0a291b570", 00:09:59.734 "assigned_rate_limits": { 00:09:59.734 "rw_ios_per_sec": 0, 00:09:59.734 "rw_mbytes_per_sec": 0, 00:09:59.734 "r_mbytes_per_sec": 0, 00:09:59.734 "w_mbytes_per_sec": 0 00:09:59.734 }, 00:09:59.734 "claimed": false, 00:09:59.734 "zoned": false, 00:09:59.734 "supported_io_types": { 00:09:59.734 "read": true, 00:09:59.734 "write": true, 00:09:59.734 "unmap": true, 00:09:59.734 "flush": true, 00:09:59.734 "reset": true, 00:09:59.734 "nvme_admin": false, 00:09:59.734 "nvme_io": false, 00:09:59.734 "nvme_io_md": false, 00:09:59.734 "write_zeroes": true, 00:09:59.734 "zcopy": true, 00:09:59.734 "get_zone_info": false, 00:09:59.734 "zone_management": false, 00:09:59.734 "zone_append": false, 00:09:59.734 "compare": false, 00:09:59.734 "compare_and_write": false, 00:09:59.734 "abort": true, 00:09:59.734 "seek_hole": false, 00:09:59.734 "seek_data": false, 00:09:59.734 
"copy": true, 00:09:59.734 "nvme_iov_md": false 00:09:59.734 }, 00:09:59.734 "memory_domains": [ 00:09:59.734 { 00:09:59.734 "dma_device_id": "system", 00:09:59.734 "dma_device_type": 1 00:09:59.734 }, 00:09:59.734 { 00:09:59.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.734 "dma_device_type": 2 00:09:59.734 } 00:09:59.734 ], 00:09:59.734 "driver_specific": {} 00:09:59.734 } 00:09:59.734 ] 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:59.734 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.735 [2024-11-20 17:02:23.516010] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:59.735 [2024-11-20 17:02:23.516208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:59.735 [2024-11-20 17:02:23.516347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.735 [2024-11-20 17:02:23.518977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.735 [2024-11-20 17:02:23.519167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.735 17:02:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.735 "name": "Existed_Raid", 00:09:59.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.735 "strip_size_kb": 64, 00:09:59.735 "state": "configuring", 00:09:59.735 
"raid_level": "raid0", 00:09:59.735 "superblock": false, 00:09:59.735 "num_base_bdevs": 4, 00:09:59.735 "num_base_bdevs_discovered": 3, 00:09:59.735 "num_base_bdevs_operational": 4, 00:09:59.735 "base_bdevs_list": [ 00:09:59.735 { 00:09:59.735 "name": "BaseBdev1", 00:09:59.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.735 "is_configured": false, 00:09:59.735 "data_offset": 0, 00:09:59.735 "data_size": 0 00:09:59.735 }, 00:09:59.735 { 00:09:59.735 "name": "BaseBdev2", 00:09:59.735 "uuid": "d65e990d-1350-4915-8774-330f7c04344f", 00:09:59.735 "is_configured": true, 00:09:59.735 "data_offset": 0, 00:09:59.735 "data_size": 65536 00:09:59.735 }, 00:09:59.735 { 00:09:59.735 "name": "BaseBdev3", 00:09:59.735 "uuid": "506ee4c8-71f0-4284-90c0-d307075fe3a1", 00:09:59.735 "is_configured": true, 00:09:59.735 "data_offset": 0, 00:09:59.735 "data_size": 65536 00:09:59.735 }, 00:09:59.735 { 00:09:59.735 "name": "BaseBdev4", 00:09:59.735 "uuid": "c6f2aab2-42fe-4092-abdb-afe0a291b570", 00:09:59.735 "is_configured": true, 00:09:59.735 "data_offset": 0, 00:09:59.735 "data_size": 65536 00:09:59.735 } 00:09:59.735 ] 00:09:59.735 }' 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.735 17:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.301 [2024-11-20 17:02:24.012175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.301 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.301 "name": "Existed_Raid", 00:10:00.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.301 "strip_size_kb": 64, 00:10:00.301 "state": "configuring", 00:10:00.301 "raid_level": "raid0", 00:10:00.301 "superblock": false, 00:10:00.301 
"num_base_bdevs": 4, 00:10:00.301 "num_base_bdevs_discovered": 2, 00:10:00.301 "num_base_bdevs_operational": 4, 00:10:00.301 "base_bdevs_list": [ 00:10:00.301 { 00:10:00.301 "name": "BaseBdev1", 00:10:00.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.301 "is_configured": false, 00:10:00.301 "data_offset": 0, 00:10:00.301 "data_size": 0 00:10:00.301 }, 00:10:00.301 { 00:10:00.301 "name": null, 00:10:00.301 "uuid": "d65e990d-1350-4915-8774-330f7c04344f", 00:10:00.301 "is_configured": false, 00:10:00.301 "data_offset": 0, 00:10:00.302 "data_size": 65536 00:10:00.302 }, 00:10:00.302 { 00:10:00.302 "name": "BaseBdev3", 00:10:00.302 "uuid": "506ee4c8-71f0-4284-90c0-d307075fe3a1", 00:10:00.302 "is_configured": true, 00:10:00.302 "data_offset": 0, 00:10:00.302 "data_size": 65536 00:10:00.302 }, 00:10:00.302 { 00:10:00.302 "name": "BaseBdev4", 00:10:00.302 "uuid": "c6f2aab2-42fe-4092-abdb-afe0a291b570", 00:10:00.302 "is_configured": true, 00:10:00.302 "data_offset": 0, 00:10:00.302 "data_size": 65536 00:10:00.302 } 00:10:00.302 ] 00:10:00.302 }' 00:10:00.302 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.302 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:00.868 17:02:24 
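The `verify_raid_bdev_state` calls in this trace select the `Existed_Raid` entry from `bdev_raid_get_bdevs` and compare its `state` and base-bdev bookkeeping against expectations. A rough Python rendering of that check, using sample data modeled on the JSON above (hypothetical, not captured from a live target):

```python
# Sample bdev_raid_get_bdevs-style output modeled on the log: after
# bdev_raid_remove_base_bdev BaseBdev2, only two base bdevs are configured.
raid_bdevs = [
    {
        "name": "Existed_Raid",
        "state": "configuring",
        "raid_level": "raid0",
        "num_base_bdevs": 4,
        "base_bdevs_list": [
            {"name": "BaseBdev1", "is_configured": False},
            {"name": None, "is_configured": False},
            {"name": "BaseBdev3", "is_configured": True},
            {"name": "BaseBdev4", "is_configured": True},
        ],
    }
]

def check_raid_state(bdevs, name, expected_state):
    """Simplified analogue of verify_raid_bdev_state: pick the raid bdev by
    name, compare its state, and count configured base bdevs."""
    info = next(b for b in bdevs if b["name"] == name)
    discovered = sum(b["is_configured"] for b in info["base_bdevs_list"])
    return info["state"] == expected_state, discovered

print(check_raid_state(raid_bdevs, "Existed_Raid", "configuring"))   # (True, 2)
```

With two of four base bdevs configured, the array stays in `configuring` rather than going `online`, which is exactly what the trace keeps asserting.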
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.868 [2024-11-20 17:02:24.609053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:00.868 BaseBdev1 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:00.868 [ 00:10:00.868 { 00:10:00.868 "name": "BaseBdev1", 00:10:00.868 "aliases": [ 00:10:00.868 "e7d58968-0bb5-46b5-8ee6-c803157a5c6a" 00:10:00.868 ], 00:10:00.868 "product_name": "Malloc disk", 00:10:00.868 "block_size": 512, 00:10:00.868 "num_blocks": 65536, 00:10:00.868 "uuid": "e7d58968-0bb5-46b5-8ee6-c803157a5c6a", 00:10:00.868 "assigned_rate_limits": { 00:10:00.868 "rw_ios_per_sec": 0, 00:10:00.868 "rw_mbytes_per_sec": 0, 00:10:00.868 "r_mbytes_per_sec": 0, 00:10:00.868 "w_mbytes_per_sec": 0 00:10:00.868 }, 00:10:00.868 "claimed": true, 00:10:00.868 "claim_type": "exclusive_write", 00:10:00.868 "zoned": false, 00:10:00.868 "supported_io_types": { 00:10:00.868 "read": true, 00:10:00.868 "write": true, 00:10:00.868 "unmap": true, 00:10:00.868 "flush": true, 00:10:00.868 "reset": true, 00:10:00.868 "nvme_admin": false, 00:10:00.868 "nvme_io": false, 00:10:00.868 "nvme_io_md": false, 00:10:00.868 "write_zeroes": true, 00:10:00.868 "zcopy": true, 00:10:00.868 "get_zone_info": false, 00:10:00.868 "zone_management": false, 00:10:00.868 "zone_append": false, 00:10:00.868 "compare": false, 00:10:00.868 "compare_and_write": false, 00:10:00.868 "abort": true, 00:10:00.868 "seek_hole": false, 00:10:00.868 "seek_data": false, 00:10:00.868 "copy": true, 00:10:00.868 "nvme_iov_md": false 00:10:00.868 }, 00:10:00.868 "memory_domains": [ 00:10:00.868 { 00:10:00.868 "dma_device_id": "system", 00:10:00.868 "dma_device_type": 1 00:10:00.868 }, 00:10:00.868 { 00:10:00.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.868 "dma_device_type": 2 00:10:00.868 } 00:10:00.868 ], 00:10:00.868 "driver_specific": {} 00:10:00.868 } 00:10:00.868 ] 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.868 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.868 "name": "Existed_Raid", 00:10:00.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.868 "strip_size_kb": 64, 00:10:00.868 "state": "configuring", 00:10:00.868 "raid_level": "raid0", 00:10:00.868 "superblock": false, 
00:10:00.868 "num_base_bdevs": 4, 00:10:00.868 "num_base_bdevs_discovered": 3, 00:10:00.868 "num_base_bdevs_operational": 4, 00:10:00.868 "base_bdevs_list": [ 00:10:00.868 { 00:10:00.868 "name": "BaseBdev1", 00:10:00.869 "uuid": "e7d58968-0bb5-46b5-8ee6-c803157a5c6a", 00:10:00.869 "is_configured": true, 00:10:00.869 "data_offset": 0, 00:10:00.869 "data_size": 65536 00:10:00.869 }, 00:10:00.869 { 00:10:00.869 "name": null, 00:10:00.869 "uuid": "d65e990d-1350-4915-8774-330f7c04344f", 00:10:00.869 "is_configured": false, 00:10:00.869 "data_offset": 0, 00:10:00.869 "data_size": 65536 00:10:00.869 }, 00:10:00.869 { 00:10:00.869 "name": "BaseBdev3", 00:10:00.869 "uuid": "506ee4c8-71f0-4284-90c0-d307075fe3a1", 00:10:00.869 "is_configured": true, 00:10:00.869 "data_offset": 0, 00:10:00.869 "data_size": 65536 00:10:00.869 }, 00:10:00.869 { 00:10:00.869 "name": "BaseBdev4", 00:10:00.869 "uuid": "c6f2aab2-42fe-4092-abdb-afe0a291b570", 00:10:00.869 "is_configured": true, 00:10:00.869 "data_offset": 0, 00:10:00.869 "data_size": 65536 00:10:00.869 } 00:10:00.869 ] 00:10:00.869 }' 00:10:00.869 17:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.869 17:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:01.435 17:02:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.435 [2024-11-20 17:02:25.201318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.435 17:02:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.435 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.435 "name": "Existed_Raid", 00:10:01.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.435 "strip_size_kb": 64, 00:10:01.435 "state": "configuring", 00:10:01.435 "raid_level": "raid0", 00:10:01.435 "superblock": false, 00:10:01.435 "num_base_bdevs": 4, 00:10:01.435 "num_base_bdevs_discovered": 2, 00:10:01.435 "num_base_bdevs_operational": 4, 00:10:01.435 "base_bdevs_list": [ 00:10:01.435 { 00:10:01.435 "name": "BaseBdev1", 00:10:01.435 "uuid": "e7d58968-0bb5-46b5-8ee6-c803157a5c6a", 00:10:01.436 "is_configured": true, 00:10:01.436 "data_offset": 0, 00:10:01.436 "data_size": 65536 00:10:01.436 }, 00:10:01.436 { 00:10:01.436 "name": null, 00:10:01.436 "uuid": "d65e990d-1350-4915-8774-330f7c04344f", 00:10:01.436 "is_configured": false, 00:10:01.436 "data_offset": 0, 00:10:01.436 "data_size": 65536 00:10:01.436 }, 00:10:01.436 { 00:10:01.436 "name": null, 00:10:01.436 "uuid": "506ee4c8-71f0-4284-90c0-d307075fe3a1", 00:10:01.436 "is_configured": false, 00:10:01.436 "data_offset": 0, 00:10:01.436 "data_size": 65536 00:10:01.436 }, 00:10:01.436 { 00:10:01.436 "name": "BaseBdev4", 00:10:01.436 "uuid": "c6f2aab2-42fe-4092-abdb-afe0a291b570", 00:10:01.436 "is_configured": true, 00:10:01.436 "data_offset": 0, 00:10:01.436 "data_size": 65536 00:10:01.436 } 00:10:01.436 ] 00:10:01.436 }' 00:10:01.436 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.436 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.002 [2024-11-20 17:02:25.805513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.002 "name": "Existed_Raid", 00:10:02.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.002 "strip_size_kb": 64, 00:10:02.002 "state": "configuring", 00:10:02.002 "raid_level": "raid0", 00:10:02.002 "superblock": false, 00:10:02.002 "num_base_bdevs": 4, 00:10:02.002 "num_base_bdevs_discovered": 3, 00:10:02.002 "num_base_bdevs_operational": 4, 00:10:02.002 "base_bdevs_list": [ 00:10:02.002 { 00:10:02.002 "name": "BaseBdev1", 00:10:02.002 "uuid": "e7d58968-0bb5-46b5-8ee6-c803157a5c6a", 00:10:02.002 "is_configured": true, 00:10:02.002 "data_offset": 0, 00:10:02.002 "data_size": 65536 00:10:02.002 }, 00:10:02.002 { 00:10:02.002 "name": null, 00:10:02.002 "uuid": "d65e990d-1350-4915-8774-330f7c04344f", 00:10:02.002 "is_configured": false, 00:10:02.002 "data_offset": 0, 00:10:02.002 "data_size": 65536 00:10:02.002 }, 00:10:02.002 { 00:10:02.002 "name": "BaseBdev3", 00:10:02.002 "uuid": "506ee4c8-71f0-4284-90c0-d307075fe3a1", 
00:10:02.002 "is_configured": true, 00:10:02.002 "data_offset": 0, 00:10:02.002 "data_size": 65536 00:10:02.002 }, 00:10:02.002 { 00:10:02.002 "name": "BaseBdev4", 00:10:02.002 "uuid": "c6f2aab2-42fe-4092-abdb-afe0a291b570", 00:10:02.002 "is_configured": true, 00:10:02.002 "data_offset": 0, 00:10:02.002 "data_size": 65536 00:10:02.002 } 00:10:02.002 ] 00:10:02.002 }' 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.002 17:02:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.569 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.570 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:02.570 17:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.570 17:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.570 17:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.570 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:02.570 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:02.570 17:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.570 17:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.570 [2024-11-20 17:02:26.389819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.828 17:02:26 
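The `jq '.[0].base_bdevs_list[2].is_configured'` probes in this trace read a single flag out of the `bdev_raid_get_bdevs all` payload, and `jq -r '.[] | select(.name == "Existed_Raid")'` filters the list down to one raid bdev. The equivalent accesses in Python, on a hypothetical payload shaped like the output above:

```python
# Hypothetical bdev_raid_get_bdevs-style payload; only the fields the
# log's jq filters actually touch are included.
payload = [
    {
        "name": "Existed_Raid",
        "base_bdevs_list": [
            {"name": "BaseBdev1", "is_configured": True},
            {"name": None, "is_configured": False},
            {"name": "BaseBdev3", "is_configured": True},
            {"name": "BaseBdev4", "is_configured": True},
        ],
    }
]

# jq '.[0].base_bdevs_list[2].is_configured'
print(payload[0]["base_bdevs_list"][2]["is_configured"])   # True

# jq -r '.[] | select(.name == "Existed_Raid")'
selected = [entry for entry in payload if entry["name"] == "Existed_Raid"]
print(len(selected))   # 1
```

Note that a removed base bdev keeps its slot in `base_bdevs_list` with `"name": null` and `"is_configured": false`, which is why the test probes by index rather than by name.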
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.828 "name": "Existed_Raid", 00:10:02.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.828 "strip_size_kb": 64, 00:10:02.828 "state": "configuring", 00:10:02.828 "raid_level": "raid0", 00:10:02.828 "superblock": false, 00:10:02.828 "num_base_bdevs": 4, 00:10:02.828 "num_base_bdevs_discovered": 2, 00:10:02.828 
"num_base_bdevs_operational": 4, 00:10:02.828 "base_bdevs_list": [ 00:10:02.828 { 00:10:02.828 "name": null, 00:10:02.828 "uuid": "e7d58968-0bb5-46b5-8ee6-c803157a5c6a", 00:10:02.828 "is_configured": false, 00:10:02.828 "data_offset": 0, 00:10:02.828 "data_size": 65536 00:10:02.828 }, 00:10:02.828 { 00:10:02.828 "name": null, 00:10:02.828 "uuid": "d65e990d-1350-4915-8774-330f7c04344f", 00:10:02.828 "is_configured": false, 00:10:02.828 "data_offset": 0, 00:10:02.828 "data_size": 65536 00:10:02.828 }, 00:10:02.828 { 00:10:02.828 "name": "BaseBdev3", 00:10:02.828 "uuid": "506ee4c8-71f0-4284-90c0-d307075fe3a1", 00:10:02.828 "is_configured": true, 00:10:02.828 "data_offset": 0, 00:10:02.828 "data_size": 65536 00:10:02.828 }, 00:10:02.828 { 00:10:02.828 "name": "BaseBdev4", 00:10:02.828 "uuid": "c6f2aab2-42fe-4092-abdb-afe0a291b570", 00:10:02.828 "is_configured": true, 00:10:02.828 "data_offset": 0, 00:10:02.828 "data_size": 65536 00:10:02.828 } 00:10:02.828 ] 00:10:02.828 }' 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.828 17:02:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:03.397 17:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.397 [2024-11-20 17:02:27.058685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.397 
17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.397 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.397 "name": "Existed_Raid", 00:10:03.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.397 "strip_size_kb": 64, 00:10:03.397 "state": "configuring", 00:10:03.397 "raid_level": "raid0", 00:10:03.397 "superblock": false, 00:10:03.398 "num_base_bdevs": 4, 00:10:03.398 "num_base_bdevs_discovered": 3, 00:10:03.398 "num_base_bdevs_operational": 4, 00:10:03.398 "base_bdevs_list": [ 00:10:03.398 { 00:10:03.398 "name": null, 00:10:03.398 "uuid": "e7d58968-0bb5-46b5-8ee6-c803157a5c6a", 00:10:03.398 "is_configured": false, 00:10:03.398 "data_offset": 0, 00:10:03.398 "data_size": 65536 00:10:03.398 }, 00:10:03.398 { 00:10:03.398 "name": "BaseBdev2", 00:10:03.398 "uuid": "d65e990d-1350-4915-8774-330f7c04344f", 00:10:03.398 "is_configured": true, 00:10:03.398 "data_offset": 0, 00:10:03.398 "data_size": 65536 00:10:03.398 }, 00:10:03.398 { 00:10:03.398 "name": "BaseBdev3", 00:10:03.398 "uuid": "506ee4c8-71f0-4284-90c0-d307075fe3a1", 00:10:03.398 "is_configured": true, 00:10:03.398 "data_offset": 0, 00:10:03.398 "data_size": 65536 00:10:03.398 }, 00:10:03.398 { 00:10:03.398 "name": "BaseBdev4", 00:10:03.398 "uuid": "c6f2aab2-42fe-4092-abdb-afe0a291b570", 00:10:03.398 "is_configured": true, 00:10:03.398 "data_offset": 0, 00:10:03.398 "data_size": 65536 00:10:03.398 } 00:10:03.398 ] 00:10:03.398 }' 00:10:03.398 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.398 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.966 17:02:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e7d58968-0bb5-46b5-8ee6-c803157a5c6a 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.966 [2024-11-20 17:02:27.725352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:03.966 [2024-11-20 17:02:27.725412] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:03.966 [2024-11-20 17:02:27.725424] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:03.966 [2024-11-20 17:02:27.725755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:03.966 [2024-11-20 17:02:27.725986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:03.966 [2024-11-20 17:02:27.726007] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:03.966 [2024-11-20 17:02:27.726286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.966 NewBaseBdev 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.966 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:03.966 [ 00:10:03.966 { 00:10:03.966 "name": "NewBaseBdev", 00:10:03.966 "aliases": [ 00:10:03.966 "e7d58968-0bb5-46b5-8ee6-c803157a5c6a" 00:10:03.966 ], 00:10:03.966 "product_name": "Malloc disk", 00:10:03.966 "block_size": 512, 00:10:03.966 "num_blocks": 65536, 00:10:03.966 "uuid": "e7d58968-0bb5-46b5-8ee6-c803157a5c6a", 00:10:03.966 "assigned_rate_limits": { 00:10:03.966 "rw_ios_per_sec": 0, 00:10:03.966 "rw_mbytes_per_sec": 0, 00:10:03.966 "r_mbytes_per_sec": 0, 00:10:03.966 "w_mbytes_per_sec": 0 00:10:03.966 }, 00:10:03.966 "claimed": true, 00:10:03.966 "claim_type": "exclusive_write", 00:10:03.966 "zoned": false, 00:10:03.966 "supported_io_types": { 00:10:03.966 "read": true, 00:10:03.966 "write": true, 00:10:03.966 "unmap": true, 00:10:03.966 "flush": true, 00:10:03.966 "reset": true, 00:10:03.966 "nvme_admin": false, 00:10:03.966 "nvme_io": false, 00:10:03.966 "nvme_io_md": false, 00:10:03.966 "write_zeroes": true, 00:10:03.966 "zcopy": true, 00:10:03.966 "get_zone_info": false, 00:10:03.966 "zone_management": false, 00:10:03.966 "zone_append": false, 00:10:03.966 "compare": false, 00:10:03.967 "compare_and_write": false, 00:10:03.967 "abort": true, 00:10:03.967 "seek_hole": false, 00:10:03.967 "seek_data": false, 00:10:03.967 "copy": true, 00:10:03.967 "nvme_iov_md": false 00:10:03.967 }, 00:10:03.967 "memory_domains": [ 00:10:03.967 { 00:10:03.967 "dma_device_id": "system", 00:10:03.967 "dma_device_type": 1 00:10:03.967 }, 00:10:03.967 { 00:10:03.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.967 "dma_device_type": 2 00:10:03.967 } 00:10:03.967 ], 00:10:03.967 "driver_specific": {} 00:10:03.967 } 00:10:03.967 ] 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.967 "name": "Existed_Raid", 00:10:03.967 "uuid": "5944513c-b07b-41f7-b2b2-647e135bfcdf", 00:10:03.967 "strip_size_kb": 64, 00:10:03.967 "state": "online", 00:10:03.967 "raid_level": "raid0", 00:10:03.967 "superblock": false, 00:10:03.967 "num_base_bdevs": 4, 00:10:03.967 
"num_base_bdevs_discovered": 4, 00:10:03.967 "num_base_bdevs_operational": 4, 00:10:03.967 "base_bdevs_list": [ 00:10:03.967 { 00:10:03.967 "name": "NewBaseBdev", 00:10:03.967 "uuid": "e7d58968-0bb5-46b5-8ee6-c803157a5c6a", 00:10:03.967 "is_configured": true, 00:10:03.967 "data_offset": 0, 00:10:03.967 "data_size": 65536 00:10:03.967 }, 00:10:03.967 { 00:10:03.967 "name": "BaseBdev2", 00:10:03.967 "uuid": "d65e990d-1350-4915-8774-330f7c04344f", 00:10:03.967 "is_configured": true, 00:10:03.967 "data_offset": 0, 00:10:03.967 "data_size": 65536 00:10:03.967 }, 00:10:03.967 { 00:10:03.967 "name": "BaseBdev3", 00:10:03.967 "uuid": "506ee4c8-71f0-4284-90c0-d307075fe3a1", 00:10:03.967 "is_configured": true, 00:10:03.967 "data_offset": 0, 00:10:03.967 "data_size": 65536 00:10:03.967 }, 00:10:03.967 { 00:10:03.967 "name": "BaseBdev4", 00:10:03.967 "uuid": "c6f2aab2-42fe-4092-abdb-afe0a291b570", 00:10:03.967 "is_configured": true, 00:10:03.967 "data_offset": 0, 00:10:03.967 "data_size": 65536 00:10:03.967 } 00:10:03.967 ] 00:10:03.967 }' 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.967 17:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.535 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:04.535 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:04.535 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:04.535 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:04.535 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.535 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.535 17:02:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.535 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:04.535 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.535 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.535 [2024-11-20 17:02:28.270061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.535 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.535 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.535 "name": "Existed_Raid", 00:10:04.535 "aliases": [ 00:10:04.535 "5944513c-b07b-41f7-b2b2-647e135bfcdf" 00:10:04.535 ], 00:10:04.535 "product_name": "Raid Volume", 00:10:04.535 "block_size": 512, 00:10:04.535 "num_blocks": 262144, 00:10:04.535 "uuid": "5944513c-b07b-41f7-b2b2-647e135bfcdf", 00:10:04.535 "assigned_rate_limits": { 00:10:04.535 "rw_ios_per_sec": 0, 00:10:04.535 "rw_mbytes_per_sec": 0, 00:10:04.535 "r_mbytes_per_sec": 0, 00:10:04.535 "w_mbytes_per_sec": 0 00:10:04.535 }, 00:10:04.535 "claimed": false, 00:10:04.535 "zoned": false, 00:10:04.535 "supported_io_types": { 00:10:04.535 "read": true, 00:10:04.535 "write": true, 00:10:04.535 "unmap": true, 00:10:04.535 "flush": true, 00:10:04.535 "reset": true, 00:10:04.535 "nvme_admin": false, 00:10:04.535 "nvme_io": false, 00:10:04.535 "nvme_io_md": false, 00:10:04.535 "write_zeroes": true, 00:10:04.535 "zcopy": false, 00:10:04.535 "get_zone_info": false, 00:10:04.535 "zone_management": false, 00:10:04.535 "zone_append": false, 00:10:04.535 "compare": false, 00:10:04.535 "compare_and_write": false, 00:10:04.535 "abort": false, 00:10:04.535 "seek_hole": false, 00:10:04.535 "seek_data": false, 00:10:04.535 "copy": false, 00:10:04.535 "nvme_iov_md": false 00:10:04.535 }, 00:10:04.535 "memory_domains": [ 
00:10:04.535 { 00:10:04.535 "dma_device_id": "system", 00:10:04.535 "dma_device_type": 1 00:10:04.535 }, 00:10:04.535 { 00:10:04.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.535 "dma_device_type": 2 00:10:04.535 }, 00:10:04.535 { 00:10:04.535 "dma_device_id": "system", 00:10:04.535 "dma_device_type": 1 00:10:04.535 }, 00:10:04.535 { 00:10:04.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.535 "dma_device_type": 2 00:10:04.535 }, 00:10:04.535 { 00:10:04.535 "dma_device_id": "system", 00:10:04.535 "dma_device_type": 1 00:10:04.535 }, 00:10:04.535 { 00:10:04.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.535 "dma_device_type": 2 00:10:04.535 }, 00:10:04.535 { 00:10:04.535 "dma_device_id": "system", 00:10:04.535 "dma_device_type": 1 00:10:04.535 }, 00:10:04.535 { 00:10:04.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.535 "dma_device_type": 2 00:10:04.535 } 00:10:04.535 ], 00:10:04.535 "driver_specific": { 00:10:04.535 "raid": { 00:10:04.535 "uuid": "5944513c-b07b-41f7-b2b2-647e135bfcdf", 00:10:04.535 "strip_size_kb": 64, 00:10:04.535 "state": "online", 00:10:04.535 "raid_level": "raid0", 00:10:04.535 "superblock": false, 00:10:04.535 "num_base_bdevs": 4, 00:10:04.535 "num_base_bdevs_discovered": 4, 00:10:04.535 "num_base_bdevs_operational": 4, 00:10:04.535 "base_bdevs_list": [ 00:10:04.535 { 00:10:04.535 "name": "NewBaseBdev", 00:10:04.535 "uuid": "e7d58968-0bb5-46b5-8ee6-c803157a5c6a", 00:10:04.535 "is_configured": true, 00:10:04.535 "data_offset": 0, 00:10:04.535 "data_size": 65536 00:10:04.535 }, 00:10:04.535 { 00:10:04.535 "name": "BaseBdev2", 00:10:04.535 "uuid": "d65e990d-1350-4915-8774-330f7c04344f", 00:10:04.535 "is_configured": true, 00:10:04.535 "data_offset": 0, 00:10:04.535 "data_size": 65536 00:10:04.535 }, 00:10:04.535 { 00:10:04.535 "name": "BaseBdev3", 00:10:04.535 "uuid": "506ee4c8-71f0-4284-90c0-d307075fe3a1", 00:10:04.535 "is_configured": true, 00:10:04.535 "data_offset": 0, 00:10:04.535 "data_size": 65536 
00:10:04.535 }, 00:10:04.535 { 00:10:04.535 "name": "BaseBdev4", 00:10:04.535 "uuid": "c6f2aab2-42fe-4092-abdb-afe0a291b570", 00:10:04.535 "is_configured": true, 00:10:04.535 "data_offset": 0, 00:10:04.535 "data_size": 65536 00:10:04.535 } 00:10:04.535 ] 00:10:04.535 } 00:10:04.535 } 00:10:04.535 }' 00:10:04.535 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.535 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:04.535 BaseBdev2 00:10:04.535 BaseBdev3 00:10:04.535 BaseBdev4' 00:10:04.535 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.794 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.794 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.794 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:04.794 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.794 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.794 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.794 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.794 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.794 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.795 
17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.795 [2024-11-20 17:02:28.649713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.795 [2024-11-20 17:02:28.649991] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.795 [2024-11-20 17:02:28.650138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.795 [2024-11-20 17:02:28.650264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.795 [2024-11-20 17:02:28.650278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69280 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69280 ']' 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69280 00:10:04.795 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:05.054 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.054 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69280 00:10:05.054 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.054 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.054 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69280' 00:10:05.054 killing process with pid 69280 00:10:05.054 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69280 00:10:05.054 [2024-11-20 17:02:28.691486] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:05.054 17:02:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69280 00:10:05.314 [2024-11-20 17:02:29.061041] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:06.692 00:10:06.692 real 0m12.827s 00:10:06.692 user 0m21.323s 00:10:06.692 sys 0m1.721s 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.692 ************************************ 00:10:06.692 END TEST raid_state_function_test 00:10:06.692 ************************************ 00:10:06.692 17:02:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:06.692 17:02:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:06.692 17:02:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.692 17:02:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:06.692 ************************************ 00:10:06.692 START TEST raid_state_function_test_sb 00:10:06.692 ************************************ 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:06.692 
17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:06.692 Process raid pid: 69963 00:10:06.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69963 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69963' 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69963 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69963 ']' 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.692 17:02:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.692 [2024-11-20 17:02:30.305050] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:10:06.692 [2024-11-20 17:02:30.305397] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.692 [2024-11-20 17:02:30.484432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.951 [2024-11-20 17:02:30.623671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.210 [2024-11-20 17:02:30.840345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.210 [2024-11-20 17:02:30.840591] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.777 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.777 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:07.777 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:07.777 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.777 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.777 [2024-11-20 17:02:31.356140] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:07.777 [2024-11-20 17:02:31.356211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:07.777 [2024-11-20 17:02:31.356232] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:07.777 [2024-11-20 17:02:31.356247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:07.777 [2024-11-20 17:02:31.356256] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:07.777 [2024-11-20 17:02:31.356269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:07.778 [2024-11-20 17:02:31.356278] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:07.778 [2024-11-20 17:02:31.356290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.778 17:02:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.778 "name": "Existed_Raid", 00:10:07.778 "uuid": "80dcb1e1-f773-478b-b616-704a308cd1b8", 00:10:07.778 "strip_size_kb": 64, 00:10:07.778 "state": "configuring", 00:10:07.778 "raid_level": "raid0", 00:10:07.778 "superblock": true, 00:10:07.778 "num_base_bdevs": 4, 00:10:07.778 "num_base_bdevs_discovered": 0, 00:10:07.778 "num_base_bdevs_operational": 4, 00:10:07.778 "base_bdevs_list": [ 00:10:07.778 { 00:10:07.778 "name": "BaseBdev1", 00:10:07.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.778 "is_configured": false, 00:10:07.778 "data_offset": 0, 00:10:07.778 "data_size": 0 00:10:07.778 }, 00:10:07.778 { 00:10:07.778 "name": "BaseBdev2", 00:10:07.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.778 "is_configured": false, 00:10:07.778 "data_offset": 0, 00:10:07.778 "data_size": 0 00:10:07.778 }, 00:10:07.778 { 00:10:07.778 "name": "BaseBdev3", 00:10:07.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.778 "is_configured": false, 00:10:07.778 "data_offset": 0, 00:10:07.778 "data_size": 0 00:10:07.778 }, 00:10:07.778 { 00:10:07.778 "name": "BaseBdev4", 00:10:07.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.778 "is_configured": false, 00:10:07.778 "data_offset": 0, 00:10:07.778 "data_size": 0 00:10:07.778 } 00:10:07.778 ] 00:10:07.778 }' 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.778 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.036 17:02:31 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:08.036 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.036 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.036 [2024-11-20 17:02:31.900246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.296 [2024-11-20 17:02:31.900419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.296 [2024-11-20 17:02:31.912249] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.296 [2024-11-20 17:02:31.912414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.296 [2024-11-20 17:02:31.912544] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.296 [2024-11-20 17:02:31.912608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.296 [2024-11-20 17:02:31.912851] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:08.296 [2024-11-20 17:02:31.912922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.296 [2024-11-20 17:02:31.912966] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:08.296 [2024-11-20 17:02:31.913010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.296 [2024-11-20 17:02:31.959308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.296 BaseBdev1 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.296 [ 00:10:08.296 { 00:10:08.296 "name": "BaseBdev1", 00:10:08.296 "aliases": [ 00:10:08.296 "7364aa33-a714-46d6-9e4d-0c84c3da9025" 00:10:08.296 ], 00:10:08.296 "product_name": "Malloc disk", 00:10:08.296 "block_size": 512, 00:10:08.296 "num_blocks": 65536, 00:10:08.296 "uuid": "7364aa33-a714-46d6-9e4d-0c84c3da9025", 00:10:08.296 "assigned_rate_limits": { 00:10:08.296 "rw_ios_per_sec": 0, 00:10:08.296 "rw_mbytes_per_sec": 0, 00:10:08.296 "r_mbytes_per_sec": 0, 00:10:08.296 "w_mbytes_per_sec": 0 00:10:08.296 }, 00:10:08.296 "claimed": true, 00:10:08.296 "claim_type": "exclusive_write", 00:10:08.296 "zoned": false, 00:10:08.296 "supported_io_types": { 00:10:08.296 "read": true, 00:10:08.296 "write": true, 00:10:08.296 "unmap": true, 00:10:08.296 "flush": true, 00:10:08.296 "reset": true, 00:10:08.296 "nvme_admin": false, 00:10:08.296 "nvme_io": false, 00:10:08.296 "nvme_io_md": false, 00:10:08.296 "write_zeroes": true, 00:10:08.296 "zcopy": true, 00:10:08.296 "get_zone_info": false, 00:10:08.296 "zone_management": false, 00:10:08.296 "zone_append": false, 00:10:08.296 "compare": false, 00:10:08.296 "compare_and_write": false, 00:10:08.296 "abort": true, 00:10:08.296 "seek_hole": false, 00:10:08.296 "seek_data": false, 00:10:08.296 "copy": true, 00:10:08.296 "nvme_iov_md": false 00:10:08.296 }, 00:10:08.296 "memory_domains": [ 00:10:08.296 { 00:10:08.296 "dma_device_id": "system", 00:10:08.296 "dma_device_type": 1 00:10:08.296 }, 00:10:08.296 { 00:10:08.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.296 "dma_device_type": 2 00:10:08.296 } 00:10:08.296 ], 00:10:08.296 "driver_specific": {} 
00:10:08.296 } 00:10:08.296 ] 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.296 17:02:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.296 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.296 17:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.296 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.296 17:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.296 17:02:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.296 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.296 "name": "Existed_Raid", 00:10:08.296 "uuid": "74ee3a55-d221-4df8-9fe7-62bb1f5618fe", 00:10:08.296 "strip_size_kb": 64, 00:10:08.296 "state": "configuring", 00:10:08.296 "raid_level": "raid0", 00:10:08.296 "superblock": true, 00:10:08.296 "num_base_bdevs": 4, 00:10:08.296 "num_base_bdevs_discovered": 1, 00:10:08.296 "num_base_bdevs_operational": 4, 00:10:08.296 "base_bdevs_list": [ 00:10:08.296 { 00:10:08.296 "name": "BaseBdev1", 00:10:08.296 "uuid": "7364aa33-a714-46d6-9e4d-0c84c3da9025", 00:10:08.296 "is_configured": true, 00:10:08.296 "data_offset": 2048, 00:10:08.296 "data_size": 63488 00:10:08.296 }, 00:10:08.296 { 00:10:08.296 "name": "BaseBdev2", 00:10:08.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.296 "is_configured": false, 00:10:08.296 "data_offset": 0, 00:10:08.296 "data_size": 0 00:10:08.296 }, 00:10:08.296 { 00:10:08.296 "name": "BaseBdev3", 00:10:08.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.296 "is_configured": false, 00:10:08.296 "data_offset": 0, 00:10:08.296 "data_size": 0 00:10:08.296 }, 00:10:08.296 { 00:10:08.296 "name": "BaseBdev4", 00:10:08.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.296 "is_configured": false, 00:10:08.296 "data_offset": 0, 00:10:08.296 "data_size": 0 00:10:08.296 } 00:10:08.296 ] 00:10:08.296 }' 00:10:08.296 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.296 17:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:08.865 [2024-11-20 17:02:32.516744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.865 [2024-11-20 17:02:32.516847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.865 [2024-11-20 17:02:32.524869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.865 [2024-11-20 17:02:32.527525] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.865 [2024-11-20 17:02:32.527694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.865 [2024-11-20 17:02:32.527836] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:08.865 [2024-11-20 17:02:32.527999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.865 [2024-11-20 17:02:32.528139] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:08.865 [2024-11-20 17:02:32.528292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:08.865 17:02:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.865 "name": 
"Existed_Raid", 00:10:08.865 "uuid": "0e1af4ed-929e-4b31-9274-2d5c90f23ba8", 00:10:08.865 "strip_size_kb": 64, 00:10:08.865 "state": "configuring", 00:10:08.865 "raid_level": "raid0", 00:10:08.865 "superblock": true, 00:10:08.865 "num_base_bdevs": 4, 00:10:08.865 "num_base_bdevs_discovered": 1, 00:10:08.865 "num_base_bdevs_operational": 4, 00:10:08.865 "base_bdevs_list": [ 00:10:08.865 { 00:10:08.865 "name": "BaseBdev1", 00:10:08.865 "uuid": "7364aa33-a714-46d6-9e4d-0c84c3da9025", 00:10:08.865 "is_configured": true, 00:10:08.865 "data_offset": 2048, 00:10:08.865 "data_size": 63488 00:10:08.865 }, 00:10:08.865 { 00:10:08.865 "name": "BaseBdev2", 00:10:08.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.865 "is_configured": false, 00:10:08.865 "data_offset": 0, 00:10:08.865 "data_size": 0 00:10:08.865 }, 00:10:08.865 { 00:10:08.865 "name": "BaseBdev3", 00:10:08.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.865 "is_configured": false, 00:10:08.865 "data_offset": 0, 00:10:08.865 "data_size": 0 00:10:08.865 }, 00:10:08.865 { 00:10:08.865 "name": "BaseBdev4", 00:10:08.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.865 "is_configured": false, 00:10:08.865 "data_offset": 0, 00:10:08.865 "data_size": 0 00:10:08.865 } 00:10:08.865 ] 00:10:08.865 }' 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.865 17:02:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.433 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:09.433 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.433 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.433 [2024-11-20 17:02:33.067780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:09.433 BaseBdev2 00:10:09.433 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.433 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:09.433 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:09.433 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.434 [ 00:10:09.434 { 00:10:09.434 "name": "BaseBdev2", 00:10:09.434 "aliases": [ 00:10:09.434 "c8be0803-5081-4751-8de9-d1131256eddf" 00:10:09.434 ], 00:10:09.434 "product_name": "Malloc disk", 00:10:09.434 "block_size": 512, 00:10:09.434 "num_blocks": 65536, 00:10:09.434 "uuid": "c8be0803-5081-4751-8de9-d1131256eddf", 00:10:09.434 
"assigned_rate_limits": { 00:10:09.434 "rw_ios_per_sec": 0, 00:10:09.434 "rw_mbytes_per_sec": 0, 00:10:09.434 "r_mbytes_per_sec": 0, 00:10:09.434 "w_mbytes_per_sec": 0 00:10:09.434 }, 00:10:09.434 "claimed": true, 00:10:09.434 "claim_type": "exclusive_write", 00:10:09.434 "zoned": false, 00:10:09.434 "supported_io_types": { 00:10:09.434 "read": true, 00:10:09.434 "write": true, 00:10:09.434 "unmap": true, 00:10:09.434 "flush": true, 00:10:09.434 "reset": true, 00:10:09.434 "nvme_admin": false, 00:10:09.434 "nvme_io": false, 00:10:09.434 "nvme_io_md": false, 00:10:09.434 "write_zeroes": true, 00:10:09.434 "zcopy": true, 00:10:09.434 "get_zone_info": false, 00:10:09.434 "zone_management": false, 00:10:09.434 "zone_append": false, 00:10:09.434 "compare": false, 00:10:09.434 "compare_and_write": false, 00:10:09.434 "abort": true, 00:10:09.434 "seek_hole": false, 00:10:09.434 "seek_data": false, 00:10:09.434 "copy": true, 00:10:09.434 "nvme_iov_md": false 00:10:09.434 }, 00:10:09.434 "memory_domains": [ 00:10:09.434 { 00:10:09.434 "dma_device_id": "system", 00:10:09.434 "dma_device_type": 1 00:10:09.434 }, 00:10:09.434 { 00:10:09.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.434 "dma_device_type": 2 00:10:09.434 } 00:10:09.434 ], 00:10:09.434 "driver_specific": {} 00:10:09.434 } 00:10:09.434 ] 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.434 "name": "Existed_Raid", 00:10:09.434 "uuid": "0e1af4ed-929e-4b31-9274-2d5c90f23ba8", 00:10:09.434 "strip_size_kb": 64, 00:10:09.434 "state": "configuring", 00:10:09.434 "raid_level": "raid0", 00:10:09.434 "superblock": true, 00:10:09.434 "num_base_bdevs": 4, 00:10:09.434 "num_base_bdevs_discovered": 2, 00:10:09.434 "num_base_bdevs_operational": 4, 
00:10:09.434 "base_bdevs_list": [ 00:10:09.434 { 00:10:09.434 "name": "BaseBdev1", 00:10:09.434 "uuid": "7364aa33-a714-46d6-9e4d-0c84c3da9025", 00:10:09.434 "is_configured": true, 00:10:09.434 "data_offset": 2048, 00:10:09.434 "data_size": 63488 00:10:09.434 }, 00:10:09.434 { 00:10:09.434 "name": "BaseBdev2", 00:10:09.434 "uuid": "c8be0803-5081-4751-8de9-d1131256eddf", 00:10:09.434 "is_configured": true, 00:10:09.434 "data_offset": 2048, 00:10:09.434 "data_size": 63488 00:10:09.434 }, 00:10:09.434 { 00:10:09.434 "name": "BaseBdev3", 00:10:09.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.434 "is_configured": false, 00:10:09.434 "data_offset": 0, 00:10:09.434 "data_size": 0 00:10:09.434 }, 00:10:09.434 { 00:10:09.434 "name": "BaseBdev4", 00:10:09.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.434 "is_configured": false, 00:10:09.434 "data_offset": 0, 00:10:09.434 "data_size": 0 00:10:09.434 } 00:10:09.434 ] 00:10:09.434 }' 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.434 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.029 [2024-11-20 17:02:33.653589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.029 BaseBdev3 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.029 [ 00:10:10.029 { 00:10:10.029 "name": "BaseBdev3", 00:10:10.029 "aliases": [ 00:10:10.029 "d8e328f7-ff7e-4d13-b492-c59cb8f406d9" 00:10:10.029 ], 00:10:10.029 "product_name": "Malloc disk", 00:10:10.029 "block_size": 512, 00:10:10.029 "num_blocks": 65536, 00:10:10.029 "uuid": "d8e328f7-ff7e-4d13-b492-c59cb8f406d9", 00:10:10.029 "assigned_rate_limits": { 00:10:10.029 "rw_ios_per_sec": 0, 00:10:10.029 "rw_mbytes_per_sec": 0, 00:10:10.029 "r_mbytes_per_sec": 0, 00:10:10.029 "w_mbytes_per_sec": 0 00:10:10.029 }, 00:10:10.029 "claimed": true, 00:10:10.029 "claim_type": "exclusive_write", 00:10:10.029 "zoned": false, 00:10:10.029 "supported_io_types": { 00:10:10.029 "read": true, 00:10:10.029 
"write": true, 00:10:10.029 "unmap": true, 00:10:10.029 "flush": true, 00:10:10.029 "reset": true, 00:10:10.029 "nvme_admin": false, 00:10:10.029 "nvme_io": false, 00:10:10.029 "nvme_io_md": false, 00:10:10.029 "write_zeroes": true, 00:10:10.029 "zcopy": true, 00:10:10.029 "get_zone_info": false, 00:10:10.029 "zone_management": false, 00:10:10.029 "zone_append": false, 00:10:10.029 "compare": false, 00:10:10.029 "compare_and_write": false, 00:10:10.029 "abort": true, 00:10:10.029 "seek_hole": false, 00:10:10.029 "seek_data": false, 00:10:10.029 "copy": true, 00:10:10.029 "nvme_iov_md": false 00:10:10.029 }, 00:10:10.029 "memory_domains": [ 00:10:10.029 { 00:10:10.029 "dma_device_id": "system", 00:10:10.029 "dma_device_type": 1 00:10:10.029 }, 00:10:10.029 { 00:10:10.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.029 "dma_device_type": 2 00:10:10.029 } 00:10:10.029 ], 00:10:10.029 "driver_specific": {} 00:10:10.029 } 00:10:10.029 ] 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.029 "name": "Existed_Raid", 00:10:10.029 "uuid": "0e1af4ed-929e-4b31-9274-2d5c90f23ba8", 00:10:10.029 "strip_size_kb": 64, 00:10:10.029 "state": "configuring", 00:10:10.029 "raid_level": "raid0", 00:10:10.029 "superblock": true, 00:10:10.029 "num_base_bdevs": 4, 00:10:10.029 "num_base_bdevs_discovered": 3, 00:10:10.029 "num_base_bdevs_operational": 4, 00:10:10.029 "base_bdevs_list": [ 00:10:10.029 { 00:10:10.029 "name": "BaseBdev1", 00:10:10.029 "uuid": "7364aa33-a714-46d6-9e4d-0c84c3da9025", 00:10:10.029 "is_configured": true, 00:10:10.029 "data_offset": 2048, 00:10:10.029 "data_size": 63488 00:10:10.029 }, 00:10:10.029 { 00:10:10.029 "name": "BaseBdev2", 00:10:10.029 "uuid": 
"c8be0803-5081-4751-8de9-d1131256eddf", 00:10:10.029 "is_configured": true, 00:10:10.029 "data_offset": 2048, 00:10:10.029 "data_size": 63488 00:10:10.029 }, 00:10:10.029 { 00:10:10.029 "name": "BaseBdev3", 00:10:10.029 "uuid": "d8e328f7-ff7e-4d13-b492-c59cb8f406d9", 00:10:10.029 "is_configured": true, 00:10:10.029 "data_offset": 2048, 00:10:10.029 "data_size": 63488 00:10:10.029 }, 00:10:10.029 { 00:10:10.029 "name": "BaseBdev4", 00:10:10.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.029 "is_configured": false, 00:10:10.029 "data_offset": 0, 00:10:10.029 "data_size": 0 00:10:10.029 } 00:10:10.029 ] 00:10:10.029 }' 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.029 17:02:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.597 [2024-11-20 17:02:34.257364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:10.597 [2024-11-20 17:02:34.257733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:10.597 [2024-11-20 17:02:34.257761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:10.597 BaseBdev4 00:10:10.597 [2024-11-20 17:02:34.258121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:10.597 [2024-11-20 17:02:34.258322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:10.597 [2024-11-20 17:02:34.258342] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:10.597 [2024-11-20 17:02:34.258517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.597 [ 00:10:10.597 { 00:10:10.597 "name": "BaseBdev4", 00:10:10.597 "aliases": [ 00:10:10.597 "e75d92ce-b950-4599-88ba-e48d71d0687c" 00:10:10.597 ], 00:10:10.597 "product_name": "Malloc disk", 00:10:10.597 "block_size": 512, 00:10:10.597 
"num_blocks": 65536, 00:10:10.597 "uuid": "e75d92ce-b950-4599-88ba-e48d71d0687c", 00:10:10.597 "assigned_rate_limits": { 00:10:10.597 "rw_ios_per_sec": 0, 00:10:10.597 "rw_mbytes_per_sec": 0, 00:10:10.597 "r_mbytes_per_sec": 0, 00:10:10.597 "w_mbytes_per_sec": 0 00:10:10.597 }, 00:10:10.597 "claimed": true, 00:10:10.597 "claim_type": "exclusive_write", 00:10:10.597 "zoned": false, 00:10:10.597 "supported_io_types": { 00:10:10.597 "read": true, 00:10:10.597 "write": true, 00:10:10.597 "unmap": true, 00:10:10.597 "flush": true, 00:10:10.597 "reset": true, 00:10:10.597 "nvme_admin": false, 00:10:10.597 "nvme_io": false, 00:10:10.597 "nvme_io_md": false, 00:10:10.597 "write_zeroes": true, 00:10:10.597 "zcopy": true, 00:10:10.597 "get_zone_info": false, 00:10:10.597 "zone_management": false, 00:10:10.597 "zone_append": false, 00:10:10.597 "compare": false, 00:10:10.597 "compare_and_write": false, 00:10:10.597 "abort": true, 00:10:10.597 "seek_hole": false, 00:10:10.597 "seek_data": false, 00:10:10.597 "copy": true, 00:10:10.597 "nvme_iov_md": false 00:10:10.597 }, 00:10:10.597 "memory_domains": [ 00:10:10.597 { 00:10:10.597 "dma_device_id": "system", 00:10:10.597 "dma_device_type": 1 00:10:10.597 }, 00:10:10.597 { 00:10:10.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.597 "dma_device_type": 2 00:10:10.597 } 00:10:10.597 ], 00:10:10.597 "driver_specific": {} 00:10:10.597 } 00:10:10.597 ] 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.597 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.598 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.598 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.598 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.598 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.598 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.598 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.598 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.598 "name": "Existed_Raid", 00:10:10.598 "uuid": "0e1af4ed-929e-4b31-9274-2d5c90f23ba8", 00:10:10.598 "strip_size_kb": 64, 00:10:10.598 "state": "online", 00:10:10.598 "raid_level": "raid0", 00:10:10.598 "superblock": true, 00:10:10.598 "num_base_bdevs": 4, 
00:10:10.598 "num_base_bdevs_discovered": 4, 00:10:10.598 "num_base_bdevs_operational": 4, 00:10:10.598 "base_bdevs_list": [ 00:10:10.598 { 00:10:10.598 "name": "BaseBdev1", 00:10:10.598 "uuid": "7364aa33-a714-46d6-9e4d-0c84c3da9025", 00:10:10.598 "is_configured": true, 00:10:10.598 "data_offset": 2048, 00:10:10.598 "data_size": 63488 00:10:10.598 }, 00:10:10.598 { 00:10:10.598 "name": "BaseBdev2", 00:10:10.598 "uuid": "c8be0803-5081-4751-8de9-d1131256eddf", 00:10:10.598 "is_configured": true, 00:10:10.598 "data_offset": 2048, 00:10:10.598 "data_size": 63488 00:10:10.598 }, 00:10:10.598 { 00:10:10.598 "name": "BaseBdev3", 00:10:10.598 "uuid": "d8e328f7-ff7e-4d13-b492-c59cb8f406d9", 00:10:10.598 "is_configured": true, 00:10:10.598 "data_offset": 2048, 00:10:10.598 "data_size": 63488 00:10:10.598 }, 00:10:10.598 { 00:10:10.598 "name": "BaseBdev4", 00:10:10.598 "uuid": "e75d92ce-b950-4599-88ba-e48d71d0687c", 00:10:10.598 "is_configured": true, 00:10:10.598 "data_offset": 2048, 00:10:10.598 "data_size": 63488 00:10:10.598 } 00:10:10.598 ] 00:10:10.598 }' 00:10:10.598 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.598 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.166 
17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.166 [2024-11-20 17:02:34.830110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.166 "name": "Existed_Raid", 00:10:11.166 "aliases": [ 00:10:11.166 "0e1af4ed-929e-4b31-9274-2d5c90f23ba8" 00:10:11.166 ], 00:10:11.166 "product_name": "Raid Volume", 00:10:11.166 "block_size": 512, 00:10:11.166 "num_blocks": 253952, 00:10:11.166 "uuid": "0e1af4ed-929e-4b31-9274-2d5c90f23ba8", 00:10:11.166 "assigned_rate_limits": { 00:10:11.166 "rw_ios_per_sec": 0, 00:10:11.166 "rw_mbytes_per_sec": 0, 00:10:11.166 "r_mbytes_per_sec": 0, 00:10:11.166 "w_mbytes_per_sec": 0 00:10:11.166 }, 00:10:11.166 "claimed": false, 00:10:11.166 "zoned": false, 00:10:11.166 "supported_io_types": { 00:10:11.166 "read": true, 00:10:11.166 "write": true, 00:10:11.166 "unmap": true, 00:10:11.166 "flush": true, 00:10:11.166 "reset": true, 00:10:11.166 "nvme_admin": false, 00:10:11.166 "nvme_io": false, 00:10:11.166 "nvme_io_md": false, 00:10:11.166 "write_zeroes": true, 00:10:11.166 "zcopy": false, 00:10:11.166 "get_zone_info": false, 00:10:11.166 "zone_management": false, 00:10:11.166 "zone_append": false, 00:10:11.166 "compare": false, 00:10:11.166 "compare_and_write": false, 00:10:11.166 "abort": false, 00:10:11.166 "seek_hole": false, 00:10:11.166 "seek_data": false, 00:10:11.166 "copy": false, 00:10:11.166 
"nvme_iov_md": false 00:10:11.166 }, 00:10:11.166 "memory_domains": [ 00:10:11.166 { 00:10:11.166 "dma_device_id": "system", 00:10:11.166 "dma_device_type": 1 00:10:11.166 }, 00:10:11.166 { 00:10:11.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.166 "dma_device_type": 2 00:10:11.166 }, 00:10:11.166 { 00:10:11.166 "dma_device_id": "system", 00:10:11.166 "dma_device_type": 1 00:10:11.166 }, 00:10:11.166 { 00:10:11.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.166 "dma_device_type": 2 00:10:11.166 }, 00:10:11.166 { 00:10:11.166 "dma_device_id": "system", 00:10:11.166 "dma_device_type": 1 00:10:11.166 }, 00:10:11.166 { 00:10:11.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.166 "dma_device_type": 2 00:10:11.166 }, 00:10:11.166 { 00:10:11.166 "dma_device_id": "system", 00:10:11.166 "dma_device_type": 1 00:10:11.166 }, 00:10:11.166 { 00:10:11.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.166 "dma_device_type": 2 00:10:11.166 } 00:10:11.166 ], 00:10:11.166 "driver_specific": { 00:10:11.166 "raid": { 00:10:11.166 "uuid": "0e1af4ed-929e-4b31-9274-2d5c90f23ba8", 00:10:11.166 "strip_size_kb": 64, 00:10:11.166 "state": "online", 00:10:11.166 "raid_level": "raid0", 00:10:11.166 "superblock": true, 00:10:11.166 "num_base_bdevs": 4, 00:10:11.166 "num_base_bdevs_discovered": 4, 00:10:11.166 "num_base_bdevs_operational": 4, 00:10:11.166 "base_bdevs_list": [ 00:10:11.166 { 00:10:11.166 "name": "BaseBdev1", 00:10:11.166 "uuid": "7364aa33-a714-46d6-9e4d-0c84c3da9025", 00:10:11.166 "is_configured": true, 00:10:11.166 "data_offset": 2048, 00:10:11.166 "data_size": 63488 00:10:11.166 }, 00:10:11.166 { 00:10:11.166 "name": "BaseBdev2", 00:10:11.166 "uuid": "c8be0803-5081-4751-8de9-d1131256eddf", 00:10:11.166 "is_configured": true, 00:10:11.166 "data_offset": 2048, 00:10:11.166 "data_size": 63488 00:10:11.166 }, 00:10:11.166 { 00:10:11.166 "name": "BaseBdev3", 00:10:11.166 "uuid": "d8e328f7-ff7e-4d13-b492-c59cb8f406d9", 00:10:11.166 "is_configured": true, 
00:10:11.166 "data_offset": 2048, 00:10:11.166 "data_size": 63488 00:10:11.166 }, 00:10:11.166 { 00:10:11.166 "name": "BaseBdev4", 00:10:11.166 "uuid": "e75d92ce-b950-4599-88ba-e48d71d0687c", 00:10:11.166 "is_configured": true, 00:10:11.166 "data_offset": 2048, 00:10:11.166 "data_size": 63488 00:10:11.166 } 00:10:11.166 ] 00:10:11.166 } 00:10:11.166 } 00:10:11.166 }' 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:11.166 BaseBdev2 00:10:11.166 BaseBdev3 00:10:11.166 BaseBdev4' 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.166 17:02:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.166 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.166 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.166 17:02:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.166 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.166 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.166 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.166 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.426 [2024-11-20 17:02:35.193749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.426 [2024-11-20 17:02:35.193815] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.426 [2024-11-20 17:02:35.193896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.426 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.685 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:11.685 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.685 "name": "Existed_Raid", 00:10:11.685 "uuid": "0e1af4ed-929e-4b31-9274-2d5c90f23ba8", 00:10:11.685 "strip_size_kb": 64, 00:10:11.685 "state": "offline", 00:10:11.685 "raid_level": "raid0", 00:10:11.685 "superblock": true, 00:10:11.685 "num_base_bdevs": 4, 00:10:11.685 "num_base_bdevs_discovered": 3, 00:10:11.685 "num_base_bdevs_operational": 3, 00:10:11.685 "base_bdevs_list": [ 00:10:11.685 { 00:10:11.685 "name": null, 00:10:11.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.685 "is_configured": false, 00:10:11.685 "data_offset": 0, 00:10:11.685 "data_size": 63488 00:10:11.685 }, 00:10:11.685 { 00:10:11.685 "name": "BaseBdev2", 00:10:11.685 "uuid": "c8be0803-5081-4751-8de9-d1131256eddf", 00:10:11.685 "is_configured": true, 00:10:11.685 "data_offset": 2048, 00:10:11.685 "data_size": 63488 00:10:11.685 }, 00:10:11.685 { 00:10:11.685 "name": "BaseBdev3", 00:10:11.685 "uuid": "d8e328f7-ff7e-4d13-b492-c59cb8f406d9", 00:10:11.685 "is_configured": true, 00:10:11.685 "data_offset": 2048, 00:10:11.685 "data_size": 63488 00:10:11.685 }, 00:10:11.685 { 00:10:11.685 "name": "BaseBdev4", 00:10:11.685 "uuid": "e75d92ce-b950-4599-88ba-e48d71d0687c", 00:10:11.685 "is_configured": true, 00:10:11.685 "data_offset": 2048, 00:10:11.685 "data_size": 63488 00:10:11.685 } 00:10:11.685 ] 00:10:11.685 }' 00:10:11.685 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.685 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.943 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:11.943 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.943 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.943 
17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.943 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.944 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.944 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.944 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.944 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:11.944 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:11.944 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.944 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.202 [2024-11-20 17:02:35.811747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.202 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.202 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.202 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.202 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.202 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.202 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.202 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.202 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:12.202 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.202 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.202 17:02:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:12.202 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.202 17:02:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.202 [2024-11-20 17:02:35.964411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:12.202 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.202 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.202 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.202 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.202 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.202 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.202 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.202 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:12.461 17:02:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.461 [2024-11-20 17:02:36.105032] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:12.461 [2024-11-20 17:02:36.105089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.461 BaseBdev2 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.461 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.461 [ 00:10:12.461 { 00:10:12.461 "name": "BaseBdev2", 00:10:12.461 "aliases": [ 00:10:12.461 
"014fd386-59cf-4dc6-8425-0e81b4800d19" 00:10:12.461 ], 00:10:12.461 "product_name": "Malloc disk", 00:10:12.461 "block_size": 512, 00:10:12.461 "num_blocks": 65536, 00:10:12.461 "uuid": "014fd386-59cf-4dc6-8425-0e81b4800d19", 00:10:12.461 "assigned_rate_limits": { 00:10:12.461 "rw_ios_per_sec": 0, 00:10:12.461 "rw_mbytes_per_sec": 0, 00:10:12.461 "r_mbytes_per_sec": 0, 00:10:12.461 "w_mbytes_per_sec": 0 00:10:12.461 }, 00:10:12.461 "claimed": false, 00:10:12.461 "zoned": false, 00:10:12.461 "supported_io_types": { 00:10:12.461 "read": true, 00:10:12.461 "write": true, 00:10:12.461 "unmap": true, 00:10:12.461 "flush": true, 00:10:12.462 "reset": true, 00:10:12.462 "nvme_admin": false, 00:10:12.462 "nvme_io": false, 00:10:12.462 "nvme_io_md": false, 00:10:12.462 "write_zeroes": true, 00:10:12.462 "zcopy": true, 00:10:12.462 "get_zone_info": false, 00:10:12.462 "zone_management": false, 00:10:12.462 "zone_append": false, 00:10:12.462 "compare": false, 00:10:12.462 "compare_and_write": false, 00:10:12.462 "abort": true, 00:10:12.462 "seek_hole": false, 00:10:12.462 "seek_data": false, 00:10:12.462 "copy": true, 00:10:12.462 "nvme_iov_md": false 00:10:12.462 }, 00:10:12.462 "memory_domains": [ 00:10:12.462 { 00:10:12.462 "dma_device_id": "system", 00:10:12.462 "dma_device_type": 1 00:10:12.462 }, 00:10:12.462 { 00:10:12.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.462 "dma_device_type": 2 00:10:12.462 } 00:10:12.462 ], 00:10:12.462 "driver_specific": {} 00:10:12.462 } 00:10:12.462 ] 00:10:12.462 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.462 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:12.462 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.462 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.462 17:02:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:12.462 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.462 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.720 BaseBdev3 00:10:12.720 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.720 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:12.720 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:12.720 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.720 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.720 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.720 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.720 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.720 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.720 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.720 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.720 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:12.720 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.720 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.720 [ 00:10:12.720 { 
00:10:12.720 "name": "BaseBdev3", 00:10:12.720 "aliases": [ 00:10:12.720 "0622333c-4e18-4ec4-bef4-9c05e80a4c5c" 00:10:12.720 ], 00:10:12.720 "product_name": "Malloc disk", 00:10:12.720 "block_size": 512, 00:10:12.720 "num_blocks": 65536, 00:10:12.720 "uuid": "0622333c-4e18-4ec4-bef4-9c05e80a4c5c", 00:10:12.720 "assigned_rate_limits": { 00:10:12.720 "rw_ios_per_sec": 0, 00:10:12.720 "rw_mbytes_per_sec": 0, 00:10:12.720 "r_mbytes_per_sec": 0, 00:10:12.720 "w_mbytes_per_sec": 0 00:10:12.720 }, 00:10:12.720 "claimed": false, 00:10:12.720 "zoned": false, 00:10:12.720 "supported_io_types": { 00:10:12.720 "read": true, 00:10:12.720 "write": true, 00:10:12.720 "unmap": true, 00:10:12.720 "flush": true, 00:10:12.720 "reset": true, 00:10:12.720 "nvme_admin": false, 00:10:12.720 "nvme_io": false, 00:10:12.720 "nvme_io_md": false, 00:10:12.720 "write_zeroes": true, 00:10:12.720 "zcopy": true, 00:10:12.720 "get_zone_info": false, 00:10:12.720 "zone_management": false, 00:10:12.721 "zone_append": false, 00:10:12.721 "compare": false, 00:10:12.721 "compare_and_write": false, 00:10:12.721 "abort": true, 00:10:12.721 "seek_hole": false, 00:10:12.721 "seek_data": false, 00:10:12.721 "copy": true, 00:10:12.721 "nvme_iov_md": false 00:10:12.721 }, 00:10:12.721 "memory_domains": [ 00:10:12.721 { 00:10:12.721 "dma_device_id": "system", 00:10:12.721 "dma_device_type": 1 00:10:12.721 }, 00:10:12.721 { 00:10:12.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.721 "dma_device_type": 2 00:10:12.721 } 00:10:12.721 ], 00:10:12.721 "driver_specific": {} 00:10:12.721 } 00:10:12.721 ] 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.721 BaseBdev4 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:12.721 [ 00:10:12.721 { 00:10:12.721 "name": "BaseBdev4", 00:10:12.721 "aliases": [ 00:10:12.721 "85cebc15-b7f9-44dc-ba5f-77d3330761e9" 00:10:12.721 ], 00:10:12.721 "product_name": "Malloc disk", 00:10:12.721 "block_size": 512, 00:10:12.721 "num_blocks": 65536, 00:10:12.721 "uuid": "85cebc15-b7f9-44dc-ba5f-77d3330761e9", 00:10:12.721 "assigned_rate_limits": { 00:10:12.721 "rw_ios_per_sec": 0, 00:10:12.721 "rw_mbytes_per_sec": 0, 00:10:12.721 "r_mbytes_per_sec": 0, 00:10:12.721 "w_mbytes_per_sec": 0 00:10:12.721 }, 00:10:12.721 "claimed": false, 00:10:12.721 "zoned": false, 00:10:12.721 "supported_io_types": { 00:10:12.721 "read": true, 00:10:12.721 "write": true, 00:10:12.721 "unmap": true, 00:10:12.721 "flush": true, 00:10:12.721 "reset": true, 00:10:12.721 "nvme_admin": false, 00:10:12.721 "nvme_io": false, 00:10:12.721 "nvme_io_md": false, 00:10:12.721 "write_zeroes": true, 00:10:12.721 "zcopy": true, 00:10:12.721 "get_zone_info": false, 00:10:12.721 "zone_management": false, 00:10:12.721 "zone_append": false, 00:10:12.721 "compare": false, 00:10:12.721 "compare_and_write": false, 00:10:12.721 "abort": true, 00:10:12.721 "seek_hole": false, 00:10:12.721 "seek_data": false, 00:10:12.721 "copy": true, 00:10:12.721 "nvme_iov_md": false 00:10:12.721 }, 00:10:12.721 "memory_domains": [ 00:10:12.721 { 00:10:12.721 "dma_device_id": "system", 00:10:12.721 "dma_device_type": 1 00:10:12.721 }, 00:10:12.721 { 00:10:12.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.721 "dma_device_type": 2 00:10:12.721 } 00:10:12.721 ], 00:10:12.721 "driver_specific": {} 00:10:12.721 } 00:10:12.721 ] 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.721 17:02:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.721 [2024-11-20 17:02:36.486723] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.721 [2024-11-20 17:02:36.486820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.721 [2024-11-20 17:02:36.486854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.721 [2024-11-20 17:02:36.489283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.721 [2024-11-20 17:02:36.489348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.721 "name": "Existed_Raid", 00:10:12.721 "uuid": "ee22149d-2cd7-4199-a706-3d281ecd7a17", 00:10:12.721 "strip_size_kb": 64, 00:10:12.721 "state": "configuring", 00:10:12.721 "raid_level": "raid0", 00:10:12.721 "superblock": true, 00:10:12.721 "num_base_bdevs": 4, 00:10:12.721 "num_base_bdevs_discovered": 3, 00:10:12.721 "num_base_bdevs_operational": 4, 00:10:12.721 "base_bdevs_list": [ 00:10:12.721 { 00:10:12.721 "name": "BaseBdev1", 00:10:12.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.721 "is_configured": false, 00:10:12.721 "data_offset": 0, 00:10:12.721 "data_size": 0 00:10:12.721 }, 00:10:12.721 { 00:10:12.721 "name": "BaseBdev2", 00:10:12.721 "uuid": "014fd386-59cf-4dc6-8425-0e81b4800d19", 00:10:12.721 "is_configured": true, 00:10:12.721 "data_offset": 2048, 00:10:12.721 "data_size": 63488 
00:10:12.721 }, 00:10:12.721 { 00:10:12.721 "name": "BaseBdev3", 00:10:12.721 "uuid": "0622333c-4e18-4ec4-bef4-9c05e80a4c5c", 00:10:12.721 "is_configured": true, 00:10:12.721 "data_offset": 2048, 00:10:12.721 "data_size": 63488 00:10:12.721 }, 00:10:12.721 { 00:10:12.721 "name": "BaseBdev4", 00:10:12.721 "uuid": "85cebc15-b7f9-44dc-ba5f-77d3330761e9", 00:10:12.721 "is_configured": true, 00:10:12.721 "data_offset": 2048, 00:10:12.721 "data_size": 63488 00:10:12.721 } 00:10:12.721 ] 00:10:12.721 }' 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.721 17:02:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.287 [2024-11-20 17:02:37.014857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.287 "name": "Existed_Raid", 00:10:13.287 "uuid": "ee22149d-2cd7-4199-a706-3d281ecd7a17", 00:10:13.287 "strip_size_kb": 64, 00:10:13.287 "state": "configuring", 00:10:13.287 "raid_level": "raid0", 00:10:13.287 "superblock": true, 00:10:13.287 "num_base_bdevs": 4, 00:10:13.287 "num_base_bdevs_discovered": 2, 00:10:13.287 "num_base_bdevs_operational": 4, 00:10:13.287 "base_bdevs_list": [ 00:10:13.287 { 00:10:13.287 "name": "BaseBdev1", 00:10:13.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.287 "is_configured": false, 00:10:13.287 "data_offset": 0, 00:10:13.287 "data_size": 0 00:10:13.287 }, 00:10:13.287 { 00:10:13.287 "name": null, 00:10:13.287 "uuid": "014fd386-59cf-4dc6-8425-0e81b4800d19", 00:10:13.287 "is_configured": false, 00:10:13.287 "data_offset": 0, 00:10:13.287 "data_size": 63488 
00:10:13.287 }, 00:10:13.287 { 00:10:13.287 "name": "BaseBdev3", 00:10:13.287 "uuid": "0622333c-4e18-4ec4-bef4-9c05e80a4c5c", 00:10:13.287 "is_configured": true, 00:10:13.287 "data_offset": 2048, 00:10:13.287 "data_size": 63488 00:10:13.287 }, 00:10:13.287 { 00:10:13.287 "name": "BaseBdev4", 00:10:13.287 "uuid": "85cebc15-b7f9-44dc-ba5f-77d3330761e9", 00:10:13.287 "is_configured": true, 00:10:13.287 "data_offset": 2048, 00:10:13.287 "data_size": 63488 00:10:13.287 } 00:10:13.287 ] 00:10:13.287 }' 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.287 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.853 [2024-11-20 17:02:37.631981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.853 BaseBdev1 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.853 [ 00:10:13.853 { 00:10:13.853 "name": "BaseBdev1", 00:10:13.853 "aliases": [ 00:10:13.853 "aebccf3d-1b14-4ed1-a3c4-2358a7fd5350" 00:10:13.853 ], 00:10:13.853 "product_name": "Malloc disk", 00:10:13.853 "block_size": 512, 00:10:13.853 "num_blocks": 65536, 00:10:13.853 "uuid": "aebccf3d-1b14-4ed1-a3c4-2358a7fd5350", 00:10:13.853 "assigned_rate_limits": { 00:10:13.853 "rw_ios_per_sec": 0, 00:10:13.853 "rw_mbytes_per_sec": 0, 
00:10:13.853 "r_mbytes_per_sec": 0, 00:10:13.853 "w_mbytes_per_sec": 0 00:10:13.853 }, 00:10:13.853 "claimed": true, 00:10:13.853 "claim_type": "exclusive_write", 00:10:13.853 "zoned": false, 00:10:13.853 "supported_io_types": { 00:10:13.853 "read": true, 00:10:13.853 "write": true, 00:10:13.853 "unmap": true, 00:10:13.853 "flush": true, 00:10:13.853 "reset": true, 00:10:13.853 "nvme_admin": false, 00:10:13.853 "nvme_io": false, 00:10:13.853 "nvme_io_md": false, 00:10:13.853 "write_zeroes": true, 00:10:13.853 "zcopy": true, 00:10:13.853 "get_zone_info": false, 00:10:13.853 "zone_management": false, 00:10:13.853 "zone_append": false, 00:10:13.853 "compare": false, 00:10:13.853 "compare_and_write": false, 00:10:13.853 "abort": true, 00:10:13.853 "seek_hole": false, 00:10:13.853 "seek_data": false, 00:10:13.853 "copy": true, 00:10:13.853 "nvme_iov_md": false 00:10:13.853 }, 00:10:13.853 "memory_domains": [ 00:10:13.853 { 00:10:13.853 "dma_device_id": "system", 00:10:13.853 "dma_device_type": 1 00:10:13.853 }, 00:10:13.853 { 00:10:13.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.853 "dma_device_type": 2 00:10:13.853 } 00:10:13.853 ], 00:10:13.853 "driver_specific": {} 00:10:13.853 } 00:10:13.853 ] 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.853 17:02:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.853 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.112 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.112 "name": "Existed_Raid", 00:10:14.112 "uuid": "ee22149d-2cd7-4199-a706-3d281ecd7a17", 00:10:14.112 "strip_size_kb": 64, 00:10:14.112 "state": "configuring", 00:10:14.112 "raid_level": "raid0", 00:10:14.112 "superblock": true, 00:10:14.112 "num_base_bdevs": 4, 00:10:14.112 "num_base_bdevs_discovered": 3, 00:10:14.112 "num_base_bdevs_operational": 4, 00:10:14.112 "base_bdevs_list": [ 00:10:14.112 { 00:10:14.112 "name": "BaseBdev1", 00:10:14.112 "uuid": "aebccf3d-1b14-4ed1-a3c4-2358a7fd5350", 00:10:14.112 "is_configured": true, 00:10:14.112 "data_offset": 2048, 00:10:14.112 "data_size": 63488 00:10:14.112 }, 00:10:14.112 { 
00:10:14.112 "name": null, 00:10:14.112 "uuid": "014fd386-59cf-4dc6-8425-0e81b4800d19", 00:10:14.112 "is_configured": false, 00:10:14.112 "data_offset": 0, 00:10:14.112 "data_size": 63488 00:10:14.112 }, 00:10:14.112 { 00:10:14.112 "name": "BaseBdev3", 00:10:14.112 "uuid": "0622333c-4e18-4ec4-bef4-9c05e80a4c5c", 00:10:14.112 "is_configured": true, 00:10:14.112 "data_offset": 2048, 00:10:14.112 "data_size": 63488 00:10:14.112 }, 00:10:14.112 { 00:10:14.112 "name": "BaseBdev4", 00:10:14.112 "uuid": "85cebc15-b7f9-44dc-ba5f-77d3330761e9", 00:10:14.112 "is_configured": true, 00:10:14.112 "data_offset": 2048, 00:10:14.112 "data_size": 63488 00:10:14.112 } 00:10:14.112 ] 00:10:14.112 }' 00:10:14.112 17:02:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.112 17:02:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.372 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:14.372 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.372 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.372 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.372 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.372 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:14.372 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:14.372 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.372 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.632 [2024-11-20 17:02:38.240264] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.632 17:02:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.632 "name": "Existed_Raid", 00:10:14.632 "uuid": "ee22149d-2cd7-4199-a706-3d281ecd7a17", 00:10:14.632 "strip_size_kb": 64, 00:10:14.632 "state": "configuring", 00:10:14.632 "raid_level": "raid0", 00:10:14.632 "superblock": true, 00:10:14.632 "num_base_bdevs": 4, 00:10:14.632 "num_base_bdevs_discovered": 2, 00:10:14.632 "num_base_bdevs_operational": 4, 00:10:14.632 "base_bdevs_list": [ 00:10:14.632 { 00:10:14.632 "name": "BaseBdev1", 00:10:14.632 "uuid": "aebccf3d-1b14-4ed1-a3c4-2358a7fd5350", 00:10:14.632 "is_configured": true, 00:10:14.632 "data_offset": 2048, 00:10:14.632 "data_size": 63488 00:10:14.632 }, 00:10:14.632 { 00:10:14.632 "name": null, 00:10:14.632 "uuid": "014fd386-59cf-4dc6-8425-0e81b4800d19", 00:10:14.632 "is_configured": false, 00:10:14.632 "data_offset": 0, 00:10:14.632 "data_size": 63488 00:10:14.632 }, 00:10:14.632 { 00:10:14.632 "name": null, 00:10:14.632 "uuid": "0622333c-4e18-4ec4-bef4-9c05e80a4c5c", 00:10:14.632 "is_configured": false, 00:10:14.632 "data_offset": 0, 00:10:14.632 "data_size": 63488 00:10:14.632 }, 00:10:14.632 { 00:10:14.632 "name": "BaseBdev4", 00:10:14.632 "uuid": "85cebc15-b7f9-44dc-ba5f-77d3330761e9", 00:10:14.632 "is_configured": true, 00:10:14.632 "data_offset": 2048, 00:10:14.632 "data_size": 63488 00:10:14.632 } 00:10:14.632 ] 00:10:14.632 }' 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.632 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.891 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.891 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:14.891 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.891 
17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.150 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.150 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:15.150 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:15.150 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.150 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.150 [2024-11-20 17:02:38.804405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.150 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.150 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:15.150 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.150 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.150 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.150 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.150 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.150 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.151 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.151 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:15.151 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.151 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.151 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.151 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.151 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.151 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.151 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.151 "name": "Existed_Raid", 00:10:15.151 "uuid": "ee22149d-2cd7-4199-a706-3d281ecd7a17", 00:10:15.151 "strip_size_kb": 64, 00:10:15.151 "state": "configuring", 00:10:15.151 "raid_level": "raid0", 00:10:15.151 "superblock": true, 00:10:15.151 "num_base_bdevs": 4, 00:10:15.151 "num_base_bdevs_discovered": 3, 00:10:15.151 "num_base_bdevs_operational": 4, 00:10:15.151 "base_bdevs_list": [ 00:10:15.151 { 00:10:15.151 "name": "BaseBdev1", 00:10:15.151 "uuid": "aebccf3d-1b14-4ed1-a3c4-2358a7fd5350", 00:10:15.151 "is_configured": true, 00:10:15.151 "data_offset": 2048, 00:10:15.151 "data_size": 63488 00:10:15.151 }, 00:10:15.151 { 00:10:15.151 "name": null, 00:10:15.151 "uuid": "014fd386-59cf-4dc6-8425-0e81b4800d19", 00:10:15.151 "is_configured": false, 00:10:15.151 "data_offset": 0, 00:10:15.151 "data_size": 63488 00:10:15.151 }, 00:10:15.151 { 00:10:15.151 "name": "BaseBdev3", 00:10:15.151 "uuid": "0622333c-4e18-4ec4-bef4-9c05e80a4c5c", 00:10:15.151 "is_configured": true, 00:10:15.151 "data_offset": 2048, 00:10:15.151 "data_size": 63488 00:10:15.151 }, 00:10:15.151 { 00:10:15.151 "name": "BaseBdev4", 00:10:15.151 "uuid": 
"85cebc15-b7f9-44dc-ba5f-77d3330761e9", 00:10:15.151 "is_configured": true, 00:10:15.151 "data_offset": 2048, 00:10:15.151 "data_size": 63488 00:10:15.151 } 00:10:15.151 ] 00:10:15.151 }' 00:10:15.151 17:02:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.151 17:02:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.719 [2024-11-20 17:02:39.392724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.719 "name": "Existed_Raid", 00:10:15.719 "uuid": "ee22149d-2cd7-4199-a706-3d281ecd7a17", 00:10:15.719 "strip_size_kb": 64, 00:10:15.719 "state": "configuring", 00:10:15.719 "raid_level": "raid0", 00:10:15.719 "superblock": true, 00:10:15.719 "num_base_bdevs": 4, 00:10:15.719 "num_base_bdevs_discovered": 2, 00:10:15.719 "num_base_bdevs_operational": 4, 00:10:15.719 "base_bdevs_list": [ 00:10:15.719 { 00:10:15.719 "name": null, 00:10:15.719 
"uuid": "aebccf3d-1b14-4ed1-a3c4-2358a7fd5350", 00:10:15.719 "is_configured": false, 00:10:15.719 "data_offset": 0, 00:10:15.719 "data_size": 63488 00:10:15.719 }, 00:10:15.719 { 00:10:15.719 "name": null, 00:10:15.719 "uuid": "014fd386-59cf-4dc6-8425-0e81b4800d19", 00:10:15.719 "is_configured": false, 00:10:15.719 "data_offset": 0, 00:10:15.719 "data_size": 63488 00:10:15.719 }, 00:10:15.719 { 00:10:15.719 "name": "BaseBdev3", 00:10:15.719 "uuid": "0622333c-4e18-4ec4-bef4-9c05e80a4c5c", 00:10:15.719 "is_configured": true, 00:10:15.719 "data_offset": 2048, 00:10:15.719 "data_size": 63488 00:10:15.719 }, 00:10:15.719 { 00:10:15.719 "name": "BaseBdev4", 00:10:15.719 "uuid": "85cebc15-b7f9-44dc-ba5f-77d3330761e9", 00:10:15.719 "is_configured": true, 00:10:15.719 "data_offset": 2048, 00:10:15.719 "data_size": 63488 00:10:15.719 } 00:10:15.719 ] 00:10:15.719 }' 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.719 17:02:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.288 [2024-11-20 17:02:40.068165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.288 "name": "Existed_Raid", 00:10:16.288 "uuid": "ee22149d-2cd7-4199-a706-3d281ecd7a17", 00:10:16.288 "strip_size_kb": 64, 00:10:16.288 "state": "configuring", 00:10:16.288 "raid_level": "raid0", 00:10:16.288 "superblock": true, 00:10:16.288 "num_base_bdevs": 4, 00:10:16.288 "num_base_bdevs_discovered": 3, 00:10:16.288 "num_base_bdevs_operational": 4, 00:10:16.288 "base_bdevs_list": [ 00:10:16.288 { 00:10:16.288 "name": null, 00:10:16.288 "uuid": "aebccf3d-1b14-4ed1-a3c4-2358a7fd5350", 00:10:16.288 "is_configured": false, 00:10:16.288 "data_offset": 0, 00:10:16.288 "data_size": 63488 00:10:16.288 }, 00:10:16.288 { 00:10:16.288 "name": "BaseBdev2", 00:10:16.288 "uuid": "014fd386-59cf-4dc6-8425-0e81b4800d19", 00:10:16.288 "is_configured": true, 00:10:16.288 "data_offset": 2048, 00:10:16.288 "data_size": 63488 00:10:16.288 }, 00:10:16.288 { 00:10:16.288 "name": "BaseBdev3", 00:10:16.288 "uuid": "0622333c-4e18-4ec4-bef4-9c05e80a4c5c", 00:10:16.288 "is_configured": true, 00:10:16.288 "data_offset": 2048, 00:10:16.288 "data_size": 63488 00:10:16.288 }, 00:10:16.288 { 00:10:16.288 "name": "BaseBdev4", 00:10:16.288 "uuid": "85cebc15-b7f9-44dc-ba5f-77d3330761e9", 00:10:16.288 "is_configured": true, 00:10:16.288 "data_offset": 2048, 00:10:16.288 "data_size": 63488 00:10:16.288 } 00:10:16.288 ] 00:10:16.288 }' 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.288 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.857 17:02:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u aebccf3d-1b14-4ed1-a3c4-2358a7fd5350 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.857 [2024-11-20 17:02:40.714859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:16.857 [2024-11-20 17:02:40.715166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:16.857 [2024-11-20 17:02:40.715184] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:16.857 [2024-11-20 17:02:40.715518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:16.857 [2024-11-20 17:02:40.715686] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:16.857 [2024-11-20 17:02:40.715706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:16.857 [2024-11-20 17:02:40.715887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.857 NewBaseBdev 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.857 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.117 17:02:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.117 [ 00:10:17.117 { 00:10:17.117 "name": "NewBaseBdev", 00:10:17.117 "aliases": [ 00:10:17.117 "aebccf3d-1b14-4ed1-a3c4-2358a7fd5350" 00:10:17.117 ], 00:10:17.117 "product_name": "Malloc disk", 00:10:17.117 "block_size": 512, 00:10:17.117 "num_blocks": 65536, 00:10:17.117 "uuid": "aebccf3d-1b14-4ed1-a3c4-2358a7fd5350", 00:10:17.117 "assigned_rate_limits": { 00:10:17.117 "rw_ios_per_sec": 0, 00:10:17.117 "rw_mbytes_per_sec": 0, 00:10:17.117 "r_mbytes_per_sec": 0, 00:10:17.117 "w_mbytes_per_sec": 0 00:10:17.117 }, 00:10:17.117 "claimed": true, 00:10:17.117 "claim_type": "exclusive_write", 00:10:17.117 "zoned": false, 00:10:17.117 "supported_io_types": { 00:10:17.117 "read": true, 00:10:17.117 "write": true, 00:10:17.117 "unmap": true, 00:10:17.117 "flush": true, 00:10:17.117 "reset": true, 00:10:17.117 "nvme_admin": false, 00:10:17.117 "nvme_io": false, 00:10:17.117 "nvme_io_md": false, 00:10:17.117 "write_zeroes": true, 00:10:17.117 "zcopy": true, 00:10:17.117 "get_zone_info": false, 00:10:17.117 "zone_management": false, 00:10:17.117 "zone_append": false, 00:10:17.117 "compare": false, 00:10:17.117 "compare_and_write": false, 00:10:17.117 "abort": true, 00:10:17.117 "seek_hole": false, 00:10:17.117 "seek_data": false, 00:10:17.117 "copy": true, 00:10:17.117 "nvme_iov_md": false 00:10:17.117 }, 00:10:17.117 "memory_domains": [ 00:10:17.117 { 00:10:17.117 "dma_device_id": "system", 00:10:17.117 "dma_device_type": 1 00:10:17.117 }, 00:10:17.117 { 00:10:17.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.117 "dma_device_type": 2 00:10:17.117 } 00:10:17.117 ], 00:10:17.117 "driver_specific": {} 00:10:17.117 } 00:10:17.117 ] 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.117 17:02:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.117 "name": "Existed_Raid", 00:10:17.117 "uuid": "ee22149d-2cd7-4199-a706-3d281ecd7a17", 00:10:17.117 "strip_size_kb": 64, 00:10:17.117 
"state": "online", 00:10:17.117 "raid_level": "raid0", 00:10:17.117 "superblock": true, 00:10:17.117 "num_base_bdevs": 4, 00:10:17.117 "num_base_bdevs_discovered": 4, 00:10:17.117 "num_base_bdevs_operational": 4, 00:10:17.117 "base_bdevs_list": [ 00:10:17.117 { 00:10:17.117 "name": "NewBaseBdev", 00:10:17.117 "uuid": "aebccf3d-1b14-4ed1-a3c4-2358a7fd5350", 00:10:17.117 "is_configured": true, 00:10:17.117 "data_offset": 2048, 00:10:17.117 "data_size": 63488 00:10:17.117 }, 00:10:17.117 { 00:10:17.117 "name": "BaseBdev2", 00:10:17.117 "uuid": "014fd386-59cf-4dc6-8425-0e81b4800d19", 00:10:17.117 "is_configured": true, 00:10:17.117 "data_offset": 2048, 00:10:17.117 "data_size": 63488 00:10:17.117 }, 00:10:17.117 { 00:10:17.117 "name": "BaseBdev3", 00:10:17.117 "uuid": "0622333c-4e18-4ec4-bef4-9c05e80a4c5c", 00:10:17.117 "is_configured": true, 00:10:17.117 "data_offset": 2048, 00:10:17.117 "data_size": 63488 00:10:17.117 }, 00:10:17.117 { 00:10:17.117 "name": "BaseBdev4", 00:10:17.117 "uuid": "85cebc15-b7f9-44dc-ba5f-77d3330761e9", 00:10:17.117 "is_configured": true, 00:10:17.117 "data_offset": 2048, 00:10:17.117 "data_size": 63488 00:10:17.117 } 00:10:17.117 ] 00:10:17.117 }' 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.117 17:02:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.721 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:17.721 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:17.721 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.721 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.721 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.721 
17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.722 [2024-11-20 17:02:41.303613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.722 "name": "Existed_Raid", 00:10:17.722 "aliases": [ 00:10:17.722 "ee22149d-2cd7-4199-a706-3d281ecd7a17" 00:10:17.722 ], 00:10:17.722 "product_name": "Raid Volume", 00:10:17.722 "block_size": 512, 00:10:17.722 "num_blocks": 253952, 00:10:17.722 "uuid": "ee22149d-2cd7-4199-a706-3d281ecd7a17", 00:10:17.722 "assigned_rate_limits": { 00:10:17.722 "rw_ios_per_sec": 0, 00:10:17.722 "rw_mbytes_per_sec": 0, 00:10:17.722 "r_mbytes_per_sec": 0, 00:10:17.722 "w_mbytes_per_sec": 0 00:10:17.722 }, 00:10:17.722 "claimed": false, 00:10:17.722 "zoned": false, 00:10:17.722 "supported_io_types": { 00:10:17.722 "read": true, 00:10:17.722 "write": true, 00:10:17.722 "unmap": true, 00:10:17.722 "flush": true, 00:10:17.722 "reset": true, 00:10:17.722 "nvme_admin": false, 00:10:17.722 "nvme_io": false, 00:10:17.722 "nvme_io_md": false, 00:10:17.722 "write_zeroes": true, 00:10:17.722 "zcopy": false, 00:10:17.722 "get_zone_info": false, 00:10:17.722 "zone_management": false, 00:10:17.722 "zone_append": false, 00:10:17.722 "compare": false, 00:10:17.722 "compare_and_write": false, 00:10:17.722 "abort": 
false, 00:10:17.722 "seek_hole": false, 00:10:17.722 "seek_data": false, 00:10:17.722 "copy": false, 00:10:17.722 "nvme_iov_md": false 00:10:17.722 }, 00:10:17.722 "memory_domains": [ 00:10:17.722 { 00:10:17.722 "dma_device_id": "system", 00:10:17.722 "dma_device_type": 1 00:10:17.722 }, 00:10:17.722 { 00:10:17.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.722 "dma_device_type": 2 00:10:17.722 }, 00:10:17.722 { 00:10:17.722 "dma_device_id": "system", 00:10:17.722 "dma_device_type": 1 00:10:17.722 }, 00:10:17.722 { 00:10:17.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.722 "dma_device_type": 2 00:10:17.722 }, 00:10:17.722 { 00:10:17.722 "dma_device_id": "system", 00:10:17.722 "dma_device_type": 1 00:10:17.722 }, 00:10:17.722 { 00:10:17.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.722 "dma_device_type": 2 00:10:17.722 }, 00:10:17.722 { 00:10:17.722 "dma_device_id": "system", 00:10:17.722 "dma_device_type": 1 00:10:17.722 }, 00:10:17.722 { 00:10:17.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.722 "dma_device_type": 2 00:10:17.722 } 00:10:17.722 ], 00:10:17.722 "driver_specific": { 00:10:17.722 "raid": { 00:10:17.722 "uuid": "ee22149d-2cd7-4199-a706-3d281ecd7a17", 00:10:17.722 "strip_size_kb": 64, 00:10:17.722 "state": "online", 00:10:17.722 "raid_level": "raid0", 00:10:17.722 "superblock": true, 00:10:17.722 "num_base_bdevs": 4, 00:10:17.722 "num_base_bdevs_discovered": 4, 00:10:17.722 "num_base_bdevs_operational": 4, 00:10:17.722 "base_bdevs_list": [ 00:10:17.722 { 00:10:17.722 "name": "NewBaseBdev", 00:10:17.722 "uuid": "aebccf3d-1b14-4ed1-a3c4-2358a7fd5350", 00:10:17.722 "is_configured": true, 00:10:17.722 "data_offset": 2048, 00:10:17.722 "data_size": 63488 00:10:17.722 }, 00:10:17.722 { 00:10:17.722 "name": "BaseBdev2", 00:10:17.722 "uuid": "014fd386-59cf-4dc6-8425-0e81b4800d19", 00:10:17.722 "is_configured": true, 00:10:17.722 "data_offset": 2048, 00:10:17.722 "data_size": 63488 00:10:17.722 }, 00:10:17.722 { 00:10:17.722 
"name": "BaseBdev3", 00:10:17.722 "uuid": "0622333c-4e18-4ec4-bef4-9c05e80a4c5c", 00:10:17.722 "is_configured": true, 00:10:17.722 "data_offset": 2048, 00:10:17.722 "data_size": 63488 00:10:17.722 }, 00:10:17.722 { 00:10:17.722 "name": "BaseBdev4", 00:10:17.722 "uuid": "85cebc15-b7f9-44dc-ba5f-77d3330761e9", 00:10:17.722 "is_configured": true, 00:10:17.722 "data_offset": 2048, 00:10:17.722 "data_size": 63488 00:10:17.722 } 00:10:17.722 ] 00:10:17.722 } 00:10:17.722 } 00:10:17.722 }' 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:17.722 BaseBdev2 00:10:17.722 BaseBdev3 00:10:17.722 BaseBdev4' 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.722 17:02:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.722 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.981 [2024-11-20 17:02:41.679226] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.981 [2024-11-20 17:02:41.679265] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.981 [2024-11-20 17:02:41.679349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.981 [2024-11-20 17:02:41.679453] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.981 [2024-11-20 17:02:41.679481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69963 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69963 ']' 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69963 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69963 00:10:17.981 killing process with pid 69963 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69963' 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69963 00:10:17.981 [2024-11-20 17:02:41.714363] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.981 17:02:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69963 00:10:18.240 [2024-11-20 17:02:42.073829] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.618 ************************************ 00:10:19.618 END TEST raid_state_function_test_sb 00:10:19.618 ************************************ 00:10:19.618 17:02:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:19.618 00:10:19.618 real 0m12.951s 00:10:19.618 user 0m21.514s 00:10:19.618 sys 
0m1.720s 00:10:19.618 17:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.618 17:02:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.618 17:02:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:19.618 17:02:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:19.618 17:02:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.618 17:02:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.618 ************************************ 00:10:19.618 START TEST raid_superblock_test 00:10:19.618 ************************************ 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70644 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70644 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70644 ']' 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.618 17:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.618 [2024-11-20 17:02:43.313056] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:10:19.618 [2024-11-20 17:02:43.313243] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70644 ] 00:10:19.877 [2024-11-20 17:02:43.502901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.877 [2024-11-20 17:02:43.660601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.136 [2024-11-20 17:02:43.909495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.136 [2024-11-20 17:02:43.909567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:20.705 
17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.705 malloc1 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.705 [2024-11-20 17:02:44.428847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:20.705 [2024-11-20 17:02:44.428959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.705 [2024-11-20 17:02:44.428994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:20.705 [2024-11-20 17:02:44.429011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.705 [2024-11-20 17:02:44.431886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.705 [2024-11-20 17:02:44.431934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:20.705 pt1 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.705 malloc2 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.705 [2024-11-20 17:02:44.486087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:20.705 [2024-11-20 17:02:44.486158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.705 [2024-11-20 17:02:44.486196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:20.705 [2024-11-20 17:02:44.486214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.705 [2024-11-20 17:02:44.489077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.705 [2024-11-20 17:02:44.489124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:20.705 
pt2 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.705 malloc3 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.705 [2024-11-20 17:02:44.557033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:20.705 [2024-11-20 17:02:44.557120] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.705 [2024-11-20 17:02:44.557154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:20.705 [2024-11-20 17:02:44.557171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.705 [2024-11-20 17:02:44.560033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.705 [2024-11-20 17:02:44.560082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:20.705 pt3 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.705 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.706 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.706 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:20.706 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.706 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.965 malloc4 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.965 [2024-11-20 17:02:44.612919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:20.965 [2024-11-20 17:02:44.613023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.965 [2024-11-20 17:02:44.613056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:20.965 [2024-11-20 17:02:44.613072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.965 [2024-11-20 17:02:44.615951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.965 [2024-11-20 17:02:44.616010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:20.965 pt4 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.965 [2024-11-20 17:02:44.624989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:20.965 [2024-11-20 
17:02:44.627487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.965 [2024-11-20 17:02:44.627619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:20.965 [2024-11-20 17:02:44.627696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:20.965 [2024-11-20 17:02:44.627961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:20.965 [2024-11-20 17:02:44.627980] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:20.965 [2024-11-20 17:02:44.628295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:20.965 [2024-11-20 17:02:44.628559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:20.965 [2024-11-20 17:02:44.628594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:20.965 [2024-11-20 17:02:44.628806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.965 "name": "raid_bdev1", 00:10:20.965 "uuid": "2c5cfbfb-dfbf-4f62-8c46-6fa4adf1241d", 00:10:20.965 "strip_size_kb": 64, 00:10:20.965 "state": "online", 00:10:20.965 "raid_level": "raid0", 00:10:20.965 "superblock": true, 00:10:20.965 "num_base_bdevs": 4, 00:10:20.965 "num_base_bdevs_discovered": 4, 00:10:20.965 "num_base_bdevs_operational": 4, 00:10:20.965 "base_bdevs_list": [ 00:10:20.965 { 00:10:20.965 "name": "pt1", 00:10:20.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.965 "is_configured": true, 00:10:20.965 "data_offset": 2048, 00:10:20.965 "data_size": 63488 00:10:20.965 }, 00:10:20.965 { 00:10:20.965 "name": "pt2", 00:10:20.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.965 "is_configured": true, 00:10:20.965 "data_offset": 2048, 00:10:20.965 "data_size": 63488 00:10:20.965 }, 00:10:20.965 { 00:10:20.965 "name": "pt3", 00:10:20.965 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.965 "is_configured": true, 00:10:20.965 "data_offset": 2048, 00:10:20.965 
"data_size": 63488 00:10:20.965 }, 00:10:20.965 { 00:10:20.965 "name": "pt4", 00:10:20.965 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.965 "is_configured": true, 00:10:20.965 "data_offset": 2048, 00:10:20.965 "data_size": 63488 00:10:20.965 } 00:10:20.965 ] 00:10:20.965 }' 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.965 17:02:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.534 [2024-11-20 17:02:45.189650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:21.534 "name": "raid_bdev1", 00:10:21.534 "aliases": [ 00:10:21.534 "2c5cfbfb-dfbf-4f62-8c46-6fa4adf1241d" 
00:10:21.534 ], 00:10:21.534 "product_name": "Raid Volume", 00:10:21.534 "block_size": 512, 00:10:21.534 "num_blocks": 253952, 00:10:21.534 "uuid": "2c5cfbfb-dfbf-4f62-8c46-6fa4adf1241d", 00:10:21.534 "assigned_rate_limits": { 00:10:21.534 "rw_ios_per_sec": 0, 00:10:21.534 "rw_mbytes_per_sec": 0, 00:10:21.534 "r_mbytes_per_sec": 0, 00:10:21.534 "w_mbytes_per_sec": 0 00:10:21.534 }, 00:10:21.534 "claimed": false, 00:10:21.534 "zoned": false, 00:10:21.534 "supported_io_types": { 00:10:21.534 "read": true, 00:10:21.534 "write": true, 00:10:21.534 "unmap": true, 00:10:21.534 "flush": true, 00:10:21.534 "reset": true, 00:10:21.534 "nvme_admin": false, 00:10:21.534 "nvme_io": false, 00:10:21.534 "nvme_io_md": false, 00:10:21.534 "write_zeroes": true, 00:10:21.534 "zcopy": false, 00:10:21.534 "get_zone_info": false, 00:10:21.534 "zone_management": false, 00:10:21.534 "zone_append": false, 00:10:21.534 "compare": false, 00:10:21.534 "compare_and_write": false, 00:10:21.534 "abort": false, 00:10:21.534 "seek_hole": false, 00:10:21.534 "seek_data": false, 00:10:21.534 "copy": false, 00:10:21.534 "nvme_iov_md": false 00:10:21.534 }, 00:10:21.534 "memory_domains": [ 00:10:21.534 { 00:10:21.534 "dma_device_id": "system", 00:10:21.534 "dma_device_type": 1 00:10:21.534 }, 00:10:21.534 { 00:10:21.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.534 "dma_device_type": 2 00:10:21.534 }, 00:10:21.534 { 00:10:21.534 "dma_device_id": "system", 00:10:21.534 "dma_device_type": 1 00:10:21.534 }, 00:10:21.534 { 00:10:21.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.534 "dma_device_type": 2 00:10:21.534 }, 00:10:21.534 { 00:10:21.534 "dma_device_id": "system", 00:10:21.534 "dma_device_type": 1 00:10:21.534 }, 00:10:21.534 { 00:10:21.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.534 "dma_device_type": 2 00:10:21.534 }, 00:10:21.534 { 00:10:21.534 "dma_device_id": "system", 00:10:21.534 "dma_device_type": 1 00:10:21.534 }, 00:10:21.534 { 00:10:21.534 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:21.534 "dma_device_type": 2 00:10:21.534 } 00:10:21.534 ], 00:10:21.534 "driver_specific": { 00:10:21.534 "raid": { 00:10:21.534 "uuid": "2c5cfbfb-dfbf-4f62-8c46-6fa4adf1241d", 00:10:21.534 "strip_size_kb": 64, 00:10:21.534 "state": "online", 00:10:21.534 "raid_level": "raid0", 00:10:21.534 "superblock": true, 00:10:21.534 "num_base_bdevs": 4, 00:10:21.534 "num_base_bdevs_discovered": 4, 00:10:21.534 "num_base_bdevs_operational": 4, 00:10:21.534 "base_bdevs_list": [ 00:10:21.534 { 00:10:21.534 "name": "pt1", 00:10:21.534 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.534 "is_configured": true, 00:10:21.534 "data_offset": 2048, 00:10:21.534 "data_size": 63488 00:10:21.534 }, 00:10:21.534 { 00:10:21.534 "name": "pt2", 00:10:21.534 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.534 "is_configured": true, 00:10:21.534 "data_offset": 2048, 00:10:21.534 "data_size": 63488 00:10:21.534 }, 00:10:21.534 { 00:10:21.534 "name": "pt3", 00:10:21.534 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.534 "is_configured": true, 00:10:21.534 "data_offset": 2048, 00:10:21.534 "data_size": 63488 00:10:21.534 }, 00:10:21.534 { 00:10:21.534 "name": "pt4", 00:10:21.534 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:21.534 "is_configured": true, 00:10:21.534 "data_offset": 2048, 00:10:21.534 "data_size": 63488 00:10:21.534 } 00:10:21.534 ] 00:10:21.534 } 00:10:21.534 } 00:10:21.534 }' 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:21.534 pt2 00:10:21.534 pt3 00:10:21.534 pt4' 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.534 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.794 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.794 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.794 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.794 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.795 17:02:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.795 [2024-11-20 17:02:45.561591] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2c5cfbfb-dfbf-4f62-8c46-6fa4adf1241d 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2c5cfbfb-dfbf-4f62-8c46-6fa4adf1241d ']' 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.795 [2024-11-20 17:02:45.617292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.795 [2024-11-20 17:02:45.617321] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.795 [2024-11-20 17:02:45.617421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.795 [2024-11-20 17:02:45.617546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.795 [2024-11-20 17:02:45.617578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.795 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:22.055 17:02:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.055 [2024-11-20 17:02:45.777369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:22.055 [2024-11-20 17:02:45.780044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:22.055 [2024-11-20 17:02:45.780145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:22.055 [2024-11-20 17:02:45.780203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:22.055 [2024-11-20 17:02:45.780273] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:22.055 [2024-11-20 17:02:45.780403] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:22.055 [2024-11-20 17:02:45.780438] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:22.055 [2024-11-20 17:02:45.780471] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:22.055 [2024-11-20 17:02:45.780499] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.055 [2024-11-20 17:02:45.780519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:22.055 request: 00:10:22.055 { 00:10:22.055 "name": "raid_bdev1", 00:10:22.055 "raid_level": "raid0", 00:10:22.055 "base_bdevs": [ 00:10:22.055 "malloc1", 00:10:22.055 "malloc2", 00:10:22.055 "malloc3", 00:10:22.055 "malloc4" 00:10:22.055 ], 00:10:22.055 "strip_size_kb": 64, 00:10:22.055 "superblock": false, 00:10:22.055 "method": "bdev_raid_create", 00:10:22.055 "req_id": 1 00:10:22.055 } 00:10:22.055 Got JSON-RPC error response 00:10:22.055 response: 00:10:22.055 { 00:10:22.055 "code": -17, 00:10:22.055 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:22.055 } 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.055 [2024-11-20 17:02:45.845355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:22.055 [2024-11-20 17:02:45.845427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.055 [2024-11-20 17:02:45.845452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:22.055 [2024-11-20 17:02:45.845484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.055 [2024-11-20 17:02:45.848385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.055 [2024-11-20 17:02:45.848446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:22.055 [2024-11-20 17:02:45.848559] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:22.055 [2024-11-20 17:02:45.848629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:22.055 pt1 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.055 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.056 "name": "raid_bdev1", 00:10:22.056 "uuid": "2c5cfbfb-dfbf-4f62-8c46-6fa4adf1241d", 00:10:22.056 "strip_size_kb": 64, 00:10:22.056 "state": "configuring", 00:10:22.056 "raid_level": "raid0", 00:10:22.056 "superblock": true, 00:10:22.056 "num_base_bdevs": 4, 00:10:22.056 "num_base_bdevs_discovered": 1, 00:10:22.056 "num_base_bdevs_operational": 4, 00:10:22.056 "base_bdevs_list": [ 00:10:22.056 { 00:10:22.056 "name": "pt1", 00:10:22.056 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.056 "is_configured": true, 00:10:22.056 "data_offset": 2048, 00:10:22.056 "data_size": 63488 00:10:22.056 }, 00:10:22.056 { 00:10:22.056 "name": null, 00:10:22.056 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.056 "is_configured": false, 00:10:22.056 "data_offset": 2048, 00:10:22.056 "data_size": 63488 00:10:22.056 }, 00:10:22.056 { 00:10:22.056 "name": null, 00:10:22.056 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.056 "is_configured": false, 00:10:22.056 "data_offset": 2048, 00:10:22.056 "data_size": 63488 00:10:22.056 }, 00:10:22.056 { 00:10:22.056 "name": null, 00:10:22.056 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:22.056 "is_configured": false, 00:10:22.056 "data_offset": 2048, 00:10:22.056 "data_size": 63488 00:10:22.056 } 00:10:22.056 ] 00:10:22.056 }' 00:10:22.056 17:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.056 17:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.625 [2024-11-20 17:02:46.373595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:22.625 [2024-11-20 17:02:46.373689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.625 [2024-11-20 17:02:46.373719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:22.625 [2024-11-20 17:02:46.373739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.625 [2024-11-20 17:02:46.374324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.625 [2024-11-20 17:02:46.374375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:22.625 [2024-11-20 17:02:46.374482] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:22.625 [2024-11-20 17:02:46.374531] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.625 pt2 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.625 [2024-11-20 17:02:46.381652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.625 17:02:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.625 "name": "raid_bdev1", 00:10:22.625 "uuid": "2c5cfbfb-dfbf-4f62-8c46-6fa4adf1241d", 00:10:22.625 "strip_size_kb": 64, 00:10:22.625 "state": "configuring", 00:10:22.625 "raid_level": "raid0", 00:10:22.625 "superblock": true, 00:10:22.625 "num_base_bdevs": 4, 00:10:22.625 "num_base_bdevs_discovered": 1, 00:10:22.625 "num_base_bdevs_operational": 4, 00:10:22.625 "base_bdevs_list": [ 00:10:22.625 { 00:10:22.625 "name": "pt1", 00:10:22.625 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.625 "is_configured": true, 00:10:22.625 "data_offset": 2048, 00:10:22.625 "data_size": 63488 00:10:22.625 }, 00:10:22.625 { 00:10:22.625 "name": null, 00:10:22.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.625 "is_configured": false, 00:10:22.625 "data_offset": 0, 00:10:22.625 "data_size": 63488 00:10:22.625 }, 00:10:22.625 { 00:10:22.625 "name": null, 00:10:22.625 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.625 "is_configured": false, 00:10:22.625 "data_offset": 2048, 00:10:22.625 "data_size": 63488 00:10:22.625 }, 00:10:22.625 { 00:10:22.625 "name": null, 00:10:22.625 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:22.625 "is_configured": false, 00:10:22.625 "data_offset": 2048, 00:10:22.625 "data_size": 63488 00:10:22.625 } 00:10:22.625 ] 00:10:22.625 }' 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.625 17:02:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.193 [2024-11-20 17:02:46.917853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:23.193 [2024-11-20 17:02:46.917936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.193 [2024-11-20 17:02:46.917970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:23.193 [2024-11-20 17:02:46.917986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.193 [2024-11-20 17:02:46.918571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.193 [2024-11-20 17:02:46.918599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:23.193 [2024-11-20 17:02:46.918715] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:23.193 [2024-11-20 17:02:46.918771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:23.193 pt2 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.193 [2024-11-20 17:02:46.925765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:23.193 [2024-11-20 17:02:46.925834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.193 [2024-11-20 17:02:46.925863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:23.193 [2024-11-20 17:02:46.925878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.193 [2024-11-20 17:02:46.926348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.193 [2024-11-20 17:02:46.926381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:23.193 [2024-11-20 17:02:46.926494] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:23.193 [2024-11-20 17:02:46.926546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:23.193 pt3 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.193 [2024-11-20 17:02:46.933735] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:23.193 [2024-11-20 17:02:46.933816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.193 [2024-11-20 17:02:46.933857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:23.193 [2024-11-20 17:02:46.933871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.193 [2024-11-20 17:02:46.934339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.193 [2024-11-20 17:02:46.934386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:23.193 [2024-11-20 17:02:46.934473] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:23.193 [2024-11-20 17:02:46.934507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:23.193 [2024-11-20 17:02:46.934677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:23.193 [2024-11-20 17:02:46.934696] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:23.193 [2024-11-20 17:02:46.935026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:23.193 [2024-11-20 17:02:46.935228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:23.193 [2024-11-20 17:02:46.935252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:23.193 [2024-11-20 17:02:46.935452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.193 pt4 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.193 "name": "raid_bdev1", 00:10:23.193 "uuid": "2c5cfbfb-dfbf-4f62-8c46-6fa4adf1241d", 00:10:23.193 "strip_size_kb": 64, 00:10:23.193 "state": "online", 00:10:23.193 "raid_level": "raid0", 00:10:23.193 
"superblock": true, 00:10:23.193 "num_base_bdevs": 4, 00:10:23.193 "num_base_bdevs_discovered": 4, 00:10:23.193 "num_base_bdevs_operational": 4, 00:10:23.193 "base_bdevs_list": [ 00:10:23.193 { 00:10:23.193 "name": "pt1", 00:10:23.193 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.193 "is_configured": true, 00:10:23.193 "data_offset": 2048, 00:10:23.193 "data_size": 63488 00:10:23.193 }, 00:10:23.193 { 00:10:23.193 "name": "pt2", 00:10:23.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.193 "is_configured": true, 00:10:23.193 "data_offset": 2048, 00:10:23.193 "data_size": 63488 00:10:23.193 }, 00:10:23.193 { 00:10:23.193 "name": "pt3", 00:10:23.193 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.193 "is_configured": true, 00:10:23.193 "data_offset": 2048, 00:10:23.193 "data_size": 63488 00:10:23.193 }, 00:10:23.193 { 00:10:23.193 "name": "pt4", 00:10:23.193 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:23.193 "is_configured": true, 00:10:23.193 "data_offset": 2048, 00:10:23.193 "data_size": 63488 00:10:23.193 } 00:10:23.193 ] 00:10:23.193 }' 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.193 17:02:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.760 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:23.760 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:23.760 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:23.760 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:23.760 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:23.760 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:23.760 17:02:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.760 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.760 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.760 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:23.760 [2024-11-20 17:02:47.442402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.760 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.760 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:23.760 "name": "raid_bdev1", 00:10:23.760 "aliases": [ 00:10:23.760 "2c5cfbfb-dfbf-4f62-8c46-6fa4adf1241d" 00:10:23.760 ], 00:10:23.760 "product_name": "Raid Volume", 00:10:23.760 "block_size": 512, 00:10:23.760 "num_blocks": 253952, 00:10:23.760 "uuid": "2c5cfbfb-dfbf-4f62-8c46-6fa4adf1241d", 00:10:23.760 "assigned_rate_limits": { 00:10:23.760 "rw_ios_per_sec": 0, 00:10:23.760 "rw_mbytes_per_sec": 0, 00:10:23.760 "r_mbytes_per_sec": 0, 00:10:23.760 "w_mbytes_per_sec": 0 00:10:23.760 }, 00:10:23.760 "claimed": false, 00:10:23.760 "zoned": false, 00:10:23.760 "supported_io_types": { 00:10:23.760 "read": true, 00:10:23.760 "write": true, 00:10:23.760 "unmap": true, 00:10:23.760 "flush": true, 00:10:23.760 "reset": true, 00:10:23.760 "nvme_admin": false, 00:10:23.760 "nvme_io": false, 00:10:23.760 "nvme_io_md": false, 00:10:23.760 "write_zeroes": true, 00:10:23.760 "zcopy": false, 00:10:23.760 "get_zone_info": false, 00:10:23.760 "zone_management": false, 00:10:23.760 "zone_append": false, 00:10:23.760 "compare": false, 00:10:23.760 "compare_and_write": false, 00:10:23.760 "abort": false, 00:10:23.760 "seek_hole": false, 00:10:23.760 "seek_data": false, 00:10:23.760 "copy": false, 00:10:23.760 "nvme_iov_md": false 00:10:23.760 }, 00:10:23.760 
"memory_domains": [ 00:10:23.760 { 00:10:23.760 "dma_device_id": "system", 00:10:23.760 "dma_device_type": 1 00:10:23.760 }, 00:10:23.760 { 00:10:23.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.760 "dma_device_type": 2 00:10:23.760 }, 00:10:23.760 { 00:10:23.760 "dma_device_id": "system", 00:10:23.760 "dma_device_type": 1 00:10:23.761 }, 00:10:23.761 { 00:10:23.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.761 "dma_device_type": 2 00:10:23.761 }, 00:10:23.761 { 00:10:23.761 "dma_device_id": "system", 00:10:23.761 "dma_device_type": 1 00:10:23.761 }, 00:10:23.761 { 00:10:23.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.761 "dma_device_type": 2 00:10:23.761 }, 00:10:23.761 { 00:10:23.761 "dma_device_id": "system", 00:10:23.761 "dma_device_type": 1 00:10:23.761 }, 00:10:23.761 { 00:10:23.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.761 "dma_device_type": 2 00:10:23.761 } 00:10:23.761 ], 00:10:23.761 "driver_specific": { 00:10:23.761 "raid": { 00:10:23.761 "uuid": "2c5cfbfb-dfbf-4f62-8c46-6fa4adf1241d", 00:10:23.761 "strip_size_kb": 64, 00:10:23.761 "state": "online", 00:10:23.761 "raid_level": "raid0", 00:10:23.761 "superblock": true, 00:10:23.761 "num_base_bdevs": 4, 00:10:23.761 "num_base_bdevs_discovered": 4, 00:10:23.761 "num_base_bdevs_operational": 4, 00:10:23.761 "base_bdevs_list": [ 00:10:23.761 { 00:10:23.761 "name": "pt1", 00:10:23.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.761 "is_configured": true, 00:10:23.761 "data_offset": 2048, 00:10:23.761 "data_size": 63488 00:10:23.761 }, 00:10:23.761 { 00:10:23.761 "name": "pt2", 00:10:23.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.761 "is_configured": true, 00:10:23.761 "data_offset": 2048, 00:10:23.761 "data_size": 63488 00:10:23.761 }, 00:10:23.761 { 00:10:23.761 "name": "pt3", 00:10:23.761 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.761 "is_configured": true, 00:10:23.761 "data_offset": 2048, 00:10:23.761 "data_size": 63488 
00:10:23.761 }, 00:10:23.761 { 00:10:23.761 "name": "pt4", 00:10:23.761 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:23.761 "is_configured": true, 00:10:23.761 "data_offset": 2048, 00:10:23.761 "data_size": 63488 00:10:23.761 } 00:10:23.761 ] 00:10:23.761 } 00:10:23.761 } 00:10:23.761 }' 00:10:23.761 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.761 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:23.761 pt2 00:10:23.761 pt3 00:10:23.761 pt4' 00:10:23.761 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.761 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:23.761 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.761 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:23.761 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.761 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.761 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.761 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:24.021 [2024-11-20 17:02:47.810394] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2c5cfbfb-dfbf-4f62-8c46-6fa4adf1241d '!=' 2c5cfbfb-dfbf-4f62-8c46-6fa4adf1241d ']' 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70644 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70644 ']' 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70644 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.021 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70644 00:10:24.281 killing process with pid 70644 00:10:24.281 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.281 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.281 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70644' 00:10:24.281 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70644 00:10:24.281 [2024-11-20 17:02:47.887541] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.281 17:02:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70644 00:10:24.281 [2024-11-20 17:02:47.887652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.281 [2024-11-20 17:02:47.887754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.281 [2024-11-20 17:02:47.887788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:24.540 [2024-11-20 17:02:48.252837] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.511 ************************************ 00:10:25.511 END TEST raid_superblock_test 00:10:25.511 ************************************ 00:10:25.511 17:02:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:25.511 00:10:25.511 real 0m6.149s 00:10:25.511 user 0m9.219s 00:10:25.511 sys 0m0.914s 00:10:25.511 17:02:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.511 17:02:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.777 17:02:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:25.777 17:02:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:25.777 17:02:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.777 17:02:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.777 ************************************ 00:10:25.777 START TEST raid_read_error_test 00:10:25.777 ************************************ 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AID2ZLjOOp 00:10:25.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70914 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70914 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70914 ']' 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.777 17:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.777 [2024-11-20 17:02:49.532363] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:10:25.778 [2024-11-20 17:02:49.532800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70914 ] 00:10:26.037 [2024-11-20 17:02:49.724903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.037 [2024-11-20 17:02:49.886425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.296 [2024-11-20 17:02:50.112035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.296 [2024-11-20 17:02:50.112085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 BaseBdev1_malloc 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 true 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 [2024-11-20 17:02:50.629392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:26.863 [2024-11-20 17:02:50.629499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.863 [2024-11-20 17:02:50.629546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:26.863 [2024-11-20 17:02:50.629585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.863 [2024-11-20 17:02:50.632753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.863 [2024-11-20 17:02:50.632839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:26.863 BaseBdev1 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 BaseBdev2_malloc 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 true 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 [2024-11-20 17:02:50.700788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:26.863 [2024-11-20 17:02:50.701007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.863 [2024-11-20 17:02:50.701053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:26.863 [2024-11-20 17:02:50.701081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.863 [2024-11-20 17:02:50.704232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.863 [2024-11-20 17:02:50.704437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:26.863 BaseBdev2 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.123 BaseBdev3_malloc 00:10:27.123 17:02:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.123 true 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.123 [2024-11-20 17:02:50.777378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:27.123 [2024-11-20 17:02:50.777449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.123 [2024-11-20 17:02:50.777495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:27.123 [2024-11-20 17:02:50.777523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.123 [2024-11-20 17:02:50.780743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.123 [2024-11-20 17:02:50.780936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:27.123 BaseBdev3 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.123 BaseBdev4_malloc 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.123 true 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.123 [2024-11-20 17:02:50.843976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:27.123 [2024-11-20 17:02:50.844286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.123 [2024-11-20 17:02:50.844336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:27.123 [2024-11-20 17:02:50.844365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.123 [2024-11-20 17:02:50.847536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.123 [2024-11-20 17:02:50.847707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:27.123 BaseBdev4 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.123 [2024-11-20 17:02:50.852147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.123 [2024-11-20 17:02:50.854904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.123 [2024-11-20 17:02:50.855186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.123 [2024-11-20 17:02:50.855299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:27.123 [2024-11-20 17:02:50.855603] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:27.123 [2024-11-20 17:02:50.855627] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:27.123 [2024-11-20 17:02:50.855958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:27.123 [2024-11-20 17:02:50.856277] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:27.123 [2024-11-20 17:02:50.856301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:27.123 [2024-11-20 17:02:50.856548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:27.123 17:02:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.123 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.123 "name": "raid_bdev1", 00:10:27.123 "uuid": "72a105bc-b09f-4c53-9745-a63c8837c3eb", 00:10:27.123 "strip_size_kb": 64, 00:10:27.123 "state": "online", 00:10:27.123 "raid_level": "raid0", 00:10:27.123 "superblock": true, 00:10:27.123 "num_base_bdevs": 4, 00:10:27.123 "num_base_bdevs_discovered": 4, 00:10:27.123 "num_base_bdevs_operational": 4, 00:10:27.123 "base_bdevs_list": [ 00:10:27.123 
{ 00:10:27.123 "name": "BaseBdev1", 00:10:27.123 "uuid": "41c41164-5083-5520-8aaf-ab82dab445fb", 00:10:27.123 "is_configured": true, 00:10:27.123 "data_offset": 2048, 00:10:27.123 "data_size": 63488 00:10:27.123 }, 00:10:27.123 { 00:10:27.123 "name": "BaseBdev2", 00:10:27.123 "uuid": "6e90cf86-9350-5389-bed7-8d4604864ce0", 00:10:27.123 "is_configured": true, 00:10:27.124 "data_offset": 2048, 00:10:27.124 "data_size": 63488 00:10:27.124 }, 00:10:27.124 { 00:10:27.124 "name": "BaseBdev3", 00:10:27.124 "uuid": "999a795c-1c6f-502f-8131-75da06eacb71", 00:10:27.124 "is_configured": true, 00:10:27.124 "data_offset": 2048, 00:10:27.124 "data_size": 63488 00:10:27.124 }, 00:10:27.124 { 00:10:27.124 "name": "BaseBdev4", 00:10:27.124 "uuid": "aaab2ab4-a70a-50c7-a404-343e6f4f61ed", 00:10:27.124 "is_configured": true, 00:10:27.124 "data_offset": 2048, 00:10:27.124 "data_size": 63488 00:10:27.124 } 00:10:27.124 ] 00:10:27.124 }' 00:10:27.124 17:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.124 17:02:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.691 17:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:27.691 17:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:27.691 [2024-11-20 17:02:51.469964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.628 17:02:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.628 17:02:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.628 "name": "raid_bdev1", 00:10:28.628 "uuid": "72a105bc-b09f-4c53-9745-a63c8837c3eb", 00:10:28.628 "strip_size_kb": 64, 00:10:28.628 "state": "online", 00:10:28.628 "raid_level": "raid0", 00:10:28.628 "superblock": true, 00:10:28.628 "num_base_bdevs": 4, 00:10:28.628 "num_base_bdevs_discovered": 4, 00:10:28.628 "num_base_bdevs_operational": 4, 00:10:28.628 "base_bdevs_list": [ 00:10:28.628 { 00:10:28.628 "name": "BaseBdev1", 00:10:28.628 "uuid": "41c41164-5083-5520-8aaf-ab82dab445fb", 00:10:28.628 "is_configured": true, 00:10:28.628 "data_offset": 2048, 00:10:28.628 "data_size": 63488 00:10:28.628 }, 00:10:28.628 { 00:10:28.628 "name": "BaseBdev2", 00:10:28.628 "uuid": "6e90cf86-9350-5389-bed7-8d4604864ce0", 00:10:28.628 "is_configured": true, 00:10:28.628 "data_offset": 2048, 00:10:28.628 "data_size": 63488 00:10:28.628 }, 00:10:28.628 { 00:10:28.628 "name": "BaseBdev3", 00:10:28.628 "uuid": "999a795c-1c6f-502f-8131-75da06eacb71", 00:10:28.628 "is_configured": true, 00:10:28.628 "data_offset": 2048, 00:10:28.628 "data_size": 63488 00:10:28.628 }, 00:10:28.628 { 00:10:28.628 "name": "BaseBdev4", 00:10:28.628 "uuid": "aaab2ab4-a70a-50c7-a404-343e6f4f61ed", 00:10:28.628 "is_configured": true, 00:10:28.628 "data_offset": 2048, 00:10:28.628 "data_size": 63488 00:10:28.628 } 00:10:28.628 ] 00:10:28.628 }' 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.628 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.197 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:29.197 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.197 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.197 [2024-11-20 17:02:52.925166] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:29.197 [2024-11-20 17:02:52.925412] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.197 [2024-11-20 17:02:52.928917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.197 { 00:10:29.197 "results": [ 00:10:29.197 { 00:10:29.197 "job": "raid_bdev1", 00:10:29.197 "core_mask": "0x1", 00:10:29.197 "workload": "randrw", 00:10:29.197 "percentage": 50, 00:10:29.197 "status": "finished", 00:10:29.197 "queue_depth": 1, 00:10:29.197 "io_size": 131072, 00:10:29.197 "runtime": 1.453037, 00:10:29.197 "iops": 10913.69318193549, 00:10:29.197 "mibps": 1364.2116477419363, 00:10:29.197 "io_failed": 1, 00:10:29.197 "io_timeout": 0, 00:10:29.197 "avg_latency_us": 126.93628418620914, 00:10:29.197 "min_latency_us": 37.236363636363635, 00:10:29.197 "max_latency_us": 1809.6872727272728 00:10:29.197 } 00:10:29.197 ], 00:10:29.197 "core_count": 1 00:10:29.197 } 00:10:29.197 [2024-11-20 17:02:52.929152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.197 [2024-11-20 17:02:52.929253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.197 [2024-11-20 17:02:52.929272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:29.197 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.197 17:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70914 00:10:29.197 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70914 ']' 00:10:29.197 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70914 00:10:29.197 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:29.197 17:02:52 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.197 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70914 00:10:29.197 killing process with pid 70914 00:10:29.197 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:29.197 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:29.197 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70914' 00:10:29.197 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70914 00:10:29.197 [2024-11-20 17:02:52.965388] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:29.197 17:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70914 00:10:29.457 [2024-11-20 17:02:53.222478] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.835 17:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AID2ZLjOOp 00:10:30.835 17:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:30.835 17:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:30.835 17:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:10:30.835 17:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:30.835 ************************************ 00:10:30.835 END TEST raid_read_error_test 00:10:30.835 ************************************ 00:10:30.835 17:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:30.835 17:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:30.835 17:02:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:10:30.835 00:10:30.835 real 0m4.959s 
00:10:30.835 user 0m6.137s 00:10:30.835 sys 0m0.613s 00:10:30.835 17:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.835 17:02:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.835 17:02:54 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:30.835 17:02:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:30.835 17:02:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.835 17:02:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.835 ************************************ 00:10:30.835 START TEST raid_write_error_test 00:10:30.835 ************************************ 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Y8lY4ztjcO 00:10:30.835 17:02:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71060 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71060 00:10:30.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71060 ']' 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.835 17:02:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.835 [2024-11-20 17:02:54.549491] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:10:30.835 [2024-11-20 17:02:54.549692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71060 ] 00:10:31.095 [2024-11-20 17:02:54.738424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.095 [2024-11-20 17:02:54.923467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.354 [2024-11-20 17:02:55.174855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.354 [2024-11-20 17:02:55.174956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.922 BaseBdev1_malloc 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.922 true 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.922 [2024-11-20 17:02:55.661061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:31.922 [2024-11-20 17:02:55.661144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.922 [2024-11-20 17:02:55.661181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:31.922 [2024-11-20 17:02:55.661207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.922 [2024-11-20 17:02:55.664345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.922 [2024-11-20 17:02:55.664597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:31.922 BaseBdev1 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.922 BaseBdev2_malloc 00:10:31.922 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.923 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:31.923 17:02:55 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.923 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.923 true 00:10:31.923 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.923 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:31.923 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.923 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.923 [2024-11-20 17:02:55.728006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:31.923 [2024-11-20 17:02:55.728203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.923 [2024-11-20 17:02:55.728239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:31.923 [2024-11-20 17:02:55.728257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.923 [2024-11-20 17:02:55.731312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.923 [2024-11-20 17:02:55.731485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:31.923 BaseBdev2 00:10:31.923 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.923 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.923 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:31.923 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.923 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:32.182 BaseBdev3_malloc 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.182 true 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.182 [2024-11-20 17:02:55.815906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:32.182 [2024-11-20 17:02:55.815998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.182 [2024-11-20 17:02:55.816066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:32.182 [2024-11-20 17:02:55.816093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.182 [2024-11-20 17:02:55.819161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.182 [2024-11-20 17:02:55.819214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:32.182 BaseBdev3 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.182 BaseBdev4_malloc 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.182 true 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.182 [2024-11-20 17:02:55.879267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:32.182 [2024-11-20 17:02:55.879360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.182 [2024-11-20 17:02:55.879400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:32.182 [2024-11-20 17:02:55.879416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.182 [2024-11-20 17:02:55.882453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.182 [2024-11-20 17:02:55.882520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:32.182 BaseBdev4 
00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:32.182 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.183 [2024-11-20 17:02:55.887440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.183 [2024-11-20 17:02:55.890403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.183 [2024-11-20 17:02:55.890686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.183 [2024-11-20 17:02:55.890882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:32.183 [2024-11-20 17:02:55.891281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:32.183 [2024-11-20 17:02:55.891439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:32.183 [2024-11-20 17:02:55.891822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:32.183 [2024-11-20 17:02:55.892220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:32.183 [2024-11-20 17:02:55.892346] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:32.183 [2024-11-20 17:02:55.892745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.183 "name": "raid_bdev1", 00:10:32.183 "uuid": "fdbd14f5-f2f7-483f-9532-53915b395850", 00:10:32.183 "strip_size_kb": 64, 00:10:32.183 "state": "online", 00:10:32.183 "raid_level": "raid0", 00:10:32.183 "superblock": true, 00:10:32.183 "num_base_bdevs": 4, 00:10:32.183 "num_base_bdevs_discovered": 4, 00:10:32.183 
"num_base_bdevs_operational": 4, 00:10:32.183 "base_bdevs_list": [ 00:10:32.183 { 00:10:32.183 "name": "BaseBdev1", 00:10:32.183 "uuid": "06d634a5-b86a-5d90-8a8b-5351562b50f8", 00:10:32.183 "is_configured": true, 00:10:32.183 "data_offset": 2048, 00:10:32.183 "data_size": 63488 00:10:32.183 }, 00:10:32.183 { 00:10:32.183 "name": "BaseBdev2", 00:10:32.183 "uuid": "787660a5-a101-5024-85af-e4c809b44c94", 00:10:32.183 "is_configured": true, 00:10:32.183 "data_offset": 2048, 00:10:32.183 "data_size": 63488 00:10:32.183 }, 00:10:32.183 { 00:10:32.183 "name": "BaseBdev3", 00:10:32.183 "uuid": "e4bfc352-a8f5-579d-adf0-84aec78ca888", 00:10:32.183 "is_configured": true, 00:10:32.183 "data_offset": 2048, 00:10:32.183 "data_size": 63488 00:10:32.183 }, 00:10:32.183 { 00:10:32.183 "name": "BaseBdev4", 00:10:32.183 "uuid": "e0f4a1df-a0ee-5675-973c-da1be2f0880a", 00:10:32.183 "is_configured": true, 00:10:32.183 "data_offset": 2048, 00:10:32.183 "data_size": 63488 00:10:32.183 } 00:10:32.183 ] 00:10:32.183 }' 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.183 17:02:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.752 17:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:32.752 17:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:32.752 [2024-11-20 17:02:56.545128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.691 "name": "raid_bdev1", 00:10:33.691 "uuid": "fdbd14f5-f2f7-483f-9532-53915b395850", 00:10:33.691 "strip_size_kb": 64, 00:10:33.691 "state": "online", 00:10:33.691 "raid_level": "raid0", 00:10:33.691 "superblock": true, 00:10:33.691 "num_base_bdevs": 4, 00:10:33.691 "num_base_bdevs_discovered": 4, 00:10:33.691 "num_base_bdevs_operational": 4, 00:10:33.691 "base_bdevs_list": [ 00:10:33.691 { 00:10:33.691 "name": "BaseBdev1", 00:10:33.691 "uuid": "06d634a5-b86a-5d90-8a8b-5351562b50f8", 00:10:33.691 "is_configured": true, 00:10:33.691 "data_offset": 2048, 00:10:33.691 "data_size": 63488 00:10:33.691 }, 00:10:33.691 { 00:10:33.691 "name": "BaseBdev2", 00:10:33.691 "uuid": "787660a5-a101-5024-85af-e4c809b44c94", 00:10:33.691 "is_configured": true, 00:10:33.691 "data_offset": 2048, 00:10:33.691 "data_size": 63488 00:10:33.691 }, 00:10:33.691 { 00:10:33.691 "name": "BaseBdev3", 00:10:33.691 "uuid": "e4bfc352-a8f5-579d-adf0-84aec78ca888", 00:10:33.691 "is_configured": true, 00:10:33.691 "data_offset": 2048, 00:10:33.691 "data_size": 63488 00:10:33.691 }, 00:10:33.691 { 00:10:33.691 "name": "BaseBdev4", 00:10:33.691 "uuid": "e0f4a1df-a0ee-5675-973c-da1be2f0880a", 00:10:33.691 "is_configured": true, 00:10:33.691 "data_offset": 2048, 00:10:33.691 "data_size": 63488 00:10:33.691 } 00:10:33.691 ] 00:10:33.691 }' 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.691 17:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.260 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:34.260 17:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.260 17:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:34.260 [2024-11-20 17:02:57.960020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:34.260 [2024-11-20 17:02:57.960197] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.260 { 00:10:34.260 "results": [ 00:10:34.260 { 00:10:34.260 "job": "raid_bdev1", 00:10:34.260 "core_mask": "0x1", 00:10:34.260 "workload": "randrw", 00:10:34.260 "percentage": 50, 00:10:34.260 "status": "finished", 00:10:34.260 "queue_depth": 1, 00:10:34.260 "io_size": 131072, 00:10:34.260 "runtime": 1.41236, 00:10:34.260 "iops": 9788.580815089637, 00:10:34.260 "mibps": 1223.5726018862047, 00:10:34.260 "io_failed": 1, 00:10:34.260 "io_timeout": 0, 00:10:34.260 "avg_latency_us": 142.42914982312638, 00:10:34.260 "min_latency_us": 37.93454545454546, 00:10:34.260 "max_latency_us": 1936.290909090909 00:10:34.260 } 00:10:34.260 ], 00:10:34.260 "core_count": 1 00:10:34.260 } 00:10:34.260 [2024-11-20 17:02:57.963886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.260 [2024-11-20 17:02:57.963976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.260 [2024-11-20 17:02:57.964048] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.260 [2024-11-20 17:02:57.964067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:34.260 17:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.260 17:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71060 00:10:34.260 17:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71060 ']' 00:10:34.260 17:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71060 00:10:34.260 17:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:10:34.260 17:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.260 17:02:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71060 00:10:34.260 killing process with pid 71060 00:10:34.260 17:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.260 17:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.260 17:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71060' 00:10:34.260 17:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71060 00:10:34.260 [2024-11-20 17:02:58.003961] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.260 17:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71060 00:10:34.519 [2024-11-20 17:02:58.305970] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.909 17:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Y8lY4ztjcO 00:10:35.909 17:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:35.909 17:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:35.909 ************************************ 00:10:35.909 END TEST raid_write_error_test 00:10:35.909 ************************************ 00:10:35.909 17:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:35.909 17:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:35.909 17:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:35.909 17:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:35.909 17:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.71 != \0\.\0\0 ]] 00:10:35.909 00:10:35.909 real 0m4.938s 00:10:35.909 user 0m6.083s 00:10:35.909 sys 0m0.664s 00:10:35.909 17:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.909 17:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.909 17:02:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:35.909 17:02:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:35.909 17:02:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:35.909 17:02:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.909 17:02:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:35.909 ************************************ 00:10:35.909 START TEST raid_state_function_test 00:10:35.909 ************************************ 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:35.909 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:35.910 Process raid pid: 71211 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:35.910 17:02:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71211 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71211' 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71211 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71211 ']' 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.910 17:02:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.910 [2024-11-20 17:02:59.563561] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:10:35.910 [2024-11-20 17:02:59.563769] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.910 [2024-11-20 17:02:59.737156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.168 [2024-11-20 17:02:59.865840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.427 [2024-11-20 17:03:00.068221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.427 [2024-11-20 17:03:00.068263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.686 [2024-11-20 17:03:00.525116] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.686 [2024-11-20 17:03:00.525237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.686 [2024-11-20 17:03:00.525255] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.686 [2024-11-20 17:03:00.525270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.686 [2024-11-20 17:03:00.525280] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:36.686 [2024-11-20 17:03:00.525294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.686 [2024-11-20 17:03:00.525304] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:36.686 [2024-11-20 17:03:00.525317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.686 17:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.945 17:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.945 17:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.945 "name": "Existed_Raid", 00:10:36.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.945 "strip_size_kb": 64, 00:10:36.945 "state": "configuring", 00:10:36.945 "raid_level": "concat", 00:10:36.945 "superblock": false, 00:10:36.945 "num_base_bdevs": 4, 00:10:36.945 "num_base_bdevs_discovered": 0, 00:10:36.945 "num_base_bdevs_operational": 4, 00:10:36.945 "base_bdevs_list": [ 00:10:36.945 { 00:10:36.945 "name": "BaseBdev1", 00:10:36.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.945 "is_configured": false, 00:10:36.945 "data_offset": 0, 00:10:36.945 "data_size": 0 00:10:36.945 }, 00:10:36.945 { 00:10:36.945 "name": "BaseBdev2", 00:10:36.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.945 "is_configured": false, 00:10:36.945 "data_offset": 0, 00:10:36.945 "data_size": 0 00:10:36.945 }, 00:10:36.945 { 00:10:36.945 "name": "BaseBdev3", 00:10:36.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.945 "is_configured": false, 00:10:36.945 "data_offset": 0, 00:10:36.945 "data_size": 0 00:10:36.945 }, 00:10:36.945 { 00:10:36.945 "name": "BaseBdev4", 00:10:36.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.945 "is_configured": false, 00:10:36.945 "data_offset": 0, 00:10:36.945 "data_size": 0 00:10:36.945 } 00:10:36.945 ] 00:10:36.945 }' 00:10:36.945 17:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.945 17:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.204 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:37.204 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.204 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.204 [2024-11-20 17:03:01.033220] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:37.204 [2024-11-20 17:03:01.033263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:37.204 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.204 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:37.204 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.204 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.204 [2024-11-20 17:03:01.041188] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.204 [2024-11-20 17:03:01.041241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.204 [2024-11-20 17:03:01.041256] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.204 [2024-11-20 17:03:01.041273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.204 [2024-11-20 17:03:01.041283] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:37.204 [2024-11-20 17:03:01.041298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:37.204 [2024-11-20 17:03:01.041307] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:37.204 [2024-11-20 17:03:01.041322] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:37.204 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.204 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:37.204 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.204 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.463 [2024-11-20 17:03:01.084504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.463 BaseBdev1 00:10:37.463 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.463 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:37.463 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:37.463 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.463 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.463 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.463 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.464 [ 00:10:37.464 { 00:10:37.464 "name": "BaseBdev1", 00:10:37.464 "aliases": [ 00:10:37.464 "597d1b8a-21aa-451f-8130-6106757ed134" 00:10:37.464 ], 00:10:37.464 "product_name": "Malloc disk", 00:10:37.464 "block_size": 512, 00:10:37.464 "num_blocks": 65536, 00:10:37.464 "uuid": "597d1b8a-21aa-451f-8130-6106757ed134", 00:10:37.464 "assigned_rate_limits": { 00:10:37.464 "rw_ios_per_sec": 0, 00:10:37.464 "rw_mbytes_per_sec": 0, 00:10:37.464 "r_mbytes_per_sec": 0, 00:10:37.464 "w_mbytes_per_sec": 0 00:10:37.464 }, 00:10:37.464 "claimed": true, 00:10:37.464 "claim_type": "exclusive_write", 00:10:37.464 "zoned": false, 00:10:37.464 "supported_io_types": { 00:10:37.464 "read": true, 00:10:37.464 "write": true, 00:10:37.464 "unmap": true, 00:10:37.464 "flush": true, 00:10:37.464 "reset": true, 00:10:37.464 "nvme_admin": false, 00:10:37.464 "nvme_io": false, 00:10:37.464 "nvme_io_md": false, 00:10:37.464 "write_zeroes": true, 00:10:37.464 "zcopy": true, 00:10:37.464 "get_zone_info": false, 00:10:37.464 "zone_management": false, 00:10:37.464 "zone_append": false, 00:10:37.464 "compare": false, 00:10:37.464 "compare_and_write": false, 00:10:37.464 "abort": true, 00:10:37.464 "seek_hole": false, 00:10:37.464 "seek_data": false, 00:10:37.464 "copy": true, 00:10:37.464 "nvme_iov_md": false 00:10:37.464 }, 00:10:37.464 "memory_domains": [ 00:10:37.464 { 00:10:37.464 "dma_device_id": "system", 00:10:37.464 "dma_device_type": 1 00:10:37.464 }, 00:10:37.464 { 00:10:37.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.464 "dma_device_type": 2 00:10:37.464 } 00:10:37.464 ], 00:10:37.464 "driver_specific": {} 00:10:37.464 } 00:10:37.464 ] 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.464 "name": "Existed_Raid", 
00:10:37.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.464 "strip_size_kb": 64, 00:10:37.464 "state": "configuring", 00:10:37.464 "raid_level": "concat", 00:10:37.464 "superblock": false, 00:10:37.464 "num_base_bdevs": 4, 00:10:37.464 "num_base_bdevs_discovered": 1, 00:10:37.464 "num_base_bdevs_operational": 4, 00:10:37.464 "base_bdevs_list": [ 00:10:37.464 { 00:10:37.464 "name": "BaseBdev1", 00:10:37.464 "uuid": "597d1b8a-21aa-451f-8130-6106757ed134", 00:10:37.464 "is_configured": true, 00:10:37.464 "data_offset": 0, 00:10:37.464 "data_size": 65536 00:10:37.464 }, 00:10:37.464 { 00:10:37.464 "name": "BaseBdev2", 00:10:37.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.464 "is_configured": false, 00:10:37.464 "data_offset": 0, 00:10:37.464 "data_size": 0 00:10:37.464 }, 00:10:37.464 { 00:10:37.464 "name": "BaseBdev3", 00:10:37.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.464 "is_configured": false, 00:10:37.464 "data_offset": 0, 00:10:37.464 "data_size": 0 00:10:37.464 }, 00:10:37.464 { 00:10:37.464 "name": "BaseBdev4", 00:10:37.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.464 "is_configured": false, 00:10:37.464 "data_offset": 0, 00:10:37.464 "data_size": 0 00:10:37.464 } 00:10:37.464 ] 00:10:37.464 }' 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.464 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.032 [2024-11-20 17:03:01.612730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.032 [2024-11-20 17:03:01.612813] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.032 [2024-11-20 17:03:01.620824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.032 [2024-11-20 17:03:01.623284] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.032 [2024-11-20 17:03:01.623338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.032 [2024-11-20 17:03:01.623354] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.032 [2024-11-20 17:03:01.623372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.032 [2024-11-20 17:03:01.623383] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:38.032 [2024-11-20 17:03:01.623397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.032 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.032 "name": "Existed_Raid", 00:10:38.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.032 "strip_size_kb": 64, 00:10:38.032 "state": "configuring", 00:10:38.032 "raid_level": "concat", 00:10:38.032 "superblock": false, 00:10:38.032 "num_base_bdevs": 4, 00:10:38.032 
"num_base_bdevs_discovered": 1, 00:10:38.032 "num_base_bdevs_operational": 4, 00:10:38.032 "base_bdevs_list": [ 00:10:38.032 { 00:10:38.032 "name": "BaseBdev1", 00:10:38.032 "uuid": "597d1b8a-21aa-451f-8130-6106757ed134", 00:10:38.032 "is_configured": true, 00:10:38.032 "data_offset": 0, 00:10:38.032 "data_size": 65536 00:10:38.032 }, 00:10:38.032 { 00:10:38.032 "name": "BaseBdev2", 00:10:38.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.032 "is_configured": false, 00:10:38.032 "data_offset": 0, 00:10:38.032 "data_size": 0 00:10:38.032 }, 00:10:38.032 { 00:10:38.032 "name": "BaseBdev3", 00:10:38.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.032 "is_configured": false, 00:10:38.032 "data_offset": 0, 00:10:38.032 "data_size": 0 00:10:38.032 }, 00:10:38.032 { 00:10:38.032 "name": "BaseBdev4", 00:10:38.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.032 "is_configured": false, 00:10:38.032 "data_offset": 0, 00:10:38.032 "data_size": 0 00:10:38.032 } 00:10:38.032 ] 00:10:38.032 }' 00:10:38.033 17:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.033 17:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.291 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:38.291 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.291 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.550 [2024-11-20 17:03:02.175070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.550 BaseBdev2 00:10:38.550 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.550 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:38.550 17:03:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:38.550 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.550 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.550 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.550 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.550 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.550 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.550 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.550 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.550 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:38.550 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.550 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.550 [ 00:10:38.550 { 00:10:38.550 "name": "BaseBdev2", 00:10:38.550 "aliases": [ 00:10:38.550 "194dcb51-b72c-4973-93f6-957e692feb83" 00:10:38.550 ], 00:10:38.550 "product_name": "Malloc disk", 00:10:38.550 "block_size": 512, 00:10:38.550 "num_blocks": 65536, 00:10:38.550 "uuid": "194dcb51-b72c-4973-93f6-957e692feb83", 00:10:38.550 "assigned_rate_limits": { 00:10:38.550 "rw_ios_per_sec": 0, 00:10:38.550 "rw_mbytes_per_sec": 0, 00:10:38.550 "r_mbytes_per_sec": 0, 00:10:38.550 "w_mbytes_per_sec": 0 00:10:38.550 }, 00:10:38.550 "claimed": true, 00:10:38.550 "claim_type": "exclusive_write", 00:10:38.550 "zoned": false, 00:10:38.550 "supported_io_types": { 
00:10:38.550 "read": true, 00:10:38.550 "write": true, 00:10:38.550 "unmap": true, 00:10:38.550 "flush": true, 00:10:38.551 "reset": true, 00:10:38.551 "nvme_admin": false, 00:10:38.551 "nvme_io": false, 00:10:38.551 "nvme_io_md": false, 00:10:38.551 "write_zeroes": true, 00:10:38.551 "zcopy": true, 00:10:38.551 "get_zone_info": false, 00:10:38.551 "zone_management": false, 00:10:38.551 "zone_append": false, 00:10:38.551 "compare": false, 00:10:38.551 "compare_and_write": false, 00:10:38.551 "abort": true, 00:10:38.551 "seek_hole": false, 00:10:38.551 "seek_data": false, 00:10:38.551 "copy": true, 00:10:38.551 "nvme_iov_md": false 00:10:38.551 }, 00:10:38.551 "memory_domains": [ 00:10:38.551 { 00:10:38.551 "dma_device_id": "system", 00:10:38.551 "dma_device_type": 1 00:10:38.551 }, 00:10:38.551 { 00:10:38.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.551 "dma_device_type": 2 00:10:38.551 } 00:10:38.551 ], 00:10:38.551 "driver_specific": {} 00:10:38.551 } 00:10:38.551 ] 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.551 "name": "Existed_Raid", 00:10:38.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.551 "strip_size_kb": 64, 00:10:38.551 "state": "configuring", 00:10:38.551 "raid_level": "concat", 00:10:38.551 "superblock": false, 00:10:38.551 "num_base_bdevs": 4, 00:10:38.551 "num_base_bdevs_discovered": 2, 00:10:38.551 "num_base_bdevs_operational": 4, 00:10:38.551 "base_bdevs_list": [ 00:10:38.551 { 00:10:38.551 "name": "BaseBdev1", 00:10:38.551 "uuid": "597d1b8a-21aa-451f-8130-6106757ed134", 00:10:38.551 "is_configured": true, 00:10:38.551 "data_offset": 0, 00:10:38.551 "data_size": 65536 00:10:38.551 }, 00:10:38.551 { 00:10:38.551 "name": "BaseBdev2", 00:10:38.551 "uuid": "194dcb51-b72c-4973-93f6-957e692feb83", 00:10:38.551 
"is_configured": true, 00:10:38.551 "data_offset": 0, 00:10:38.551 "data_size": 65536 00:10:38.551 }, 00:10:38.551 { 00:10:38.551 "name": "BaseBdev3", 00:10:38.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.551 "is_configured": false, 00:10:38.551 "data_offset": 0, 00:10:38.551 "data_size": 0 00:10:38.551 }, 00:10:38.551 { 00:10:38.551 "name": "BaseBdev4", 00:10:38.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.551 "is_configured": false, 00:10:38.551 "data_offset": 0, 00:10:38.551 "data_size": 0 00:10:38.551 } 00:10:38.551 ] 00:10:38.551 }' 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.551 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.119 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:39.119 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.119 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.119 [2024-11-20 17:03:02.785650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.119 BaseBdev3 00:10:39.119 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.119 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:39.119 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:39.119 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.119 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.119 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.119 17:03:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.119 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.119 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.119 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.119 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.120 [ 00:10:39.120 { 00:10:39.120 "name": "BaseBdev3", 00:10:39.120 "aliases": [ 00:10:39.120 "37f64ded-a5ea-44b2-bc15-e007e5059b4b" 00:10:39.120 ], 00:10:39.120 "product_name": "Malloc disk", 00:10:39.120 "block_size": 512, 00:10:39.120 "num_blocks": 65536, 00:10:39.120 "uuid": "37f64ded-a5ea-44b2-bc15-e007e5059b4b", 00:10:39.120 "assigned_rate_limits": { 00:10:39.120 "rw_ios_per_sec": 0, 00:10:39.120 "rw_mbytes_per_sec": 0, 00:10:39.120 "r_mbytes_per_sec": 0, 00:10:39.120 "w_mbytes_per_sec": 0 00:10:39.120 }, 00:10:39.120 "claimed": true, 00:10:39.120 "claim_type": "exclusive_write", 00:10:39.120 "zoned": false, 00:10:39.120 "supported_io_types": { 00:10:39.120 "read": true, 00:10:39.120 "write": true, 00:10:39.120 "unmap": true, 00:10:39.120 "flush": true, 00:10:39.120 "reset": true, 00:10:39.120 "nvme_admin": false, 00:10:39.120 "nvme_io": false, 00:10:39.120 "nvme_io_md": false, 00:10:39.120 "write_zeroes": true, 00:10:39.120 "zcopy": true, 00:10:39.120 "get_zone_info": false, 00:10:39.120 "zone_management": false, 00:10:39.120 "zone_append": false, 00:10:39.120 "compare": false, 00:10:39.120 "compare_and_write": false, 
00:10:39.120 "abort": true, 00:10:39.120 "seek_hole": false, 00:10:39.120 "seek_data": false, 00:10:39.120 "copy": true, 00:10:39.120 "nvme_iov_md": false 00:10:39.120 }, 00:10:39.120 "memory_domains": [ 00:10:39.120 { 00:10:39.120 "dma_device_id": "system", 00:10:39.120 "dma_device_type": 1 00:10:39.120 }, 00:10:39.120 { 00:10:39.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.120 "dma_device_type": 2 00:10:39.120 } 00:10:39.120 ], 00:10:39.120 "driver_specific": {} 00:10:39.120 } 00:10:39.120 ] 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.120 "name": "Existed_Raid", 00:10:39.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.120 "strip_size_kb": 64, 00:10:39.120 "state": "configuring", 00:10:39.120 "raid_level": "concat", 00:10:39.120 "superblock": false, 00:10:39.120 "num_base_bdevs": 4, 00:10:39.120 "num_base_bdevs_discovered": 3, 00:10:39.120 "num_base_bdevs_operational": 4, 00:10:39.120 "base_bdevs_list": [ 00:10:39.120 { 00:10:39.120 "name": "BaseBdev1", 00:10:39.120 "uuid": "597d1b8a-21aa-451f-8130-6106757ed134", 00:10:39.120 "is_configured": true, 00:10:39.120 "data_offset": 0, 00:10:39.120 "data_size": 65536 00:10:39.120 }, 00:10:39.120 { 00:10:39.120 "name": "BaseBdev2", 00:10:39.120 "uuid": "194dcb51-b72c-4973-93f6-957e692feb83", 00:10:39.120 "is_configured": true, 00:10:39.120 "data_offset": 0, 00:10:39.120 "data_size": 65536 00:10:39.120 }, 00:10:39.120 { 00:10:39.120 "name": "BaseBdev3", 00:10:39.120 "uuid": "37f64ded-a5ea-44b2-bc15-e007e5059b4b", 00:10:39.120 "is_configured": true, 00:10:39.120 "data_offset": 0, 00:10:39.120 "data_size": 65536 00:10:39.120 }, 00:10:39.120 { 00:10:39.120 "name": "BaseBdev4", 00:10:39.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.120 "is_configured": false, 
00:10:39.120 "data_offset": 0, 00:10:39.120 "data_size": 0 00:10:39.120 } 00:10:39.120 ] 00:10:39.120 }' 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.120 17:03:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.687 [2024-11-20 17:03:03.373936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:39.687 [2024-11-20 17:03:03.373987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:39.687 [2024-11-20 17:03:03.373999] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:39.687 [2024-11-20 17:03:03.374312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:39.687 [2024-11-20 17:03:03.374558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:39.687 [2024-11-20 17:03:03.374579] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:39.687 [2024-11-20 17:03:03.374956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.687 BaseBdev4 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.687 [ 00:10:39.687 { 00:10:39.687 "name": "BaseBdev4", 00:10:39.687 "aliases": [ 00:10:39.687 "4eab7a24-e564-4d19-a85b-ddbc5c03ddad" 00:10:39.687 ], 00:10:39.687 "product_name": "Malloc disk", 00:10:39.687 "block_size": 512, 00:10:39.687 "num_blocks": 65536, 00:10:39.687 "uuid": "4eab7a24-e564-4d19-a85b-ddbc5c03ddad", 00:10:39.687 "assigned_rate_limits": { 00:10:39.687 "rw_ios_per_sec": 0, 00:10:39.687 "rw_mbytes_per_sec": 0, 00:10:39.687 "r_mbytes_per_sec": 0, 00:10:39.687 "w_mbytes_per_sec": 0 00:10:39.687 }, 00:10:39.687 "claimed": true, 00:10:39.687 "claim_type": "exclusive_write", 00:10:39.687 "zoned": false, 00:10:39.687 "supported_io_types": { 00:10:39.687 "read": true, 00:10:39.687 "write": true, 00:10:39.687 "unmap": true, 00:10:39.687 "flush": true, 00:10:39.687 "reset": true, 00:10:39.687 
"nvme_admin": false, 00:10:39.687 "nvme_io": false, 00:10:39.687 "nvme_io_md": false, 00:10:39.687 "write_zeroes": true, 00:10:39.687 "zcopy": true, 00:10:39.687 "get_zone_info": false, 00:10:39.687 "zone_management": false, 00:10:39.687 "zone_append": false, 00:10:39.687 "compare": false, 00:10:39.687 "compare_and_write": false, 00:10:39.687 "abort": true, 00:10:39.687 "seek_hole": false, 00:10:39.687 "seek_data": false, 00:10:39.687 "copy": true, 00:10:39.687 "nvme_iov_md": false 00:10:39.687 }, 00:10:39.687 "memory_domains": [ 00:10:39.687 { 00:10:39.687 "dma_device_id": "system", 00:10:39.687 "dma_device_type": 1 00:10:39.687 }, 00:10:39.687 { 00:10:39.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.687 "dma_device_type": 2 00:10:39.687 } 00:10:39.687 ], 00:10:39.687 "driver_specific": {} 00:10:39.687 } 00:10:39.687 ] 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.687 
17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.687 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.687 "name": "Existed_Raid", 00:10:39.687 "uuid": "54dcf6f5-48df-4fa0-8cc6-195bc691af86", 00:10:39.687 "strip_size_kb": 64, 00:10:39.687 "state": "online", 00:10:39.687 "raid_level": "concat", 00:10:39.687 "superblock": false, 00:10:39.687 "num_base_bdevs": 4, 00:10:39.687 "num_base_bdevs_discovered": 4, 00:10:39.687 "num_base_bdevs_operational": 4, 00:10:39.687 "base_bdevs_list": [ 00:10:39.687 { 00:10:39.688 "name": "BaseBdev1", 00:10:39.688 "uuid": "597d1b8a-21aa-451f-8130-6106757ed134", 00:10:39.688 "is_configured": true, 00:10:39.688 "data_offset": 0, 00:10:39.688 "data_size": 65536 00:10:39.688 }, 00:10:39.688 { 00:10:39.688 "name": "BaseBdev2", 00:10:39.688 "uuid": "194dcb51-b72c-4973-93f6-957e692feb83", 00:10:39.688 "is_configured": true, 00:10:39.688 "data_offset": 0, 00:10:39.688 "data_size": 65536 00:10:39.688 }, 00:10:39.688 { 00:10:39.688 "name": "BaseBdev3", 
00:10:39.688 "uuid": "37f64ded-a5ea-44b2-bc15-e007e5059b4b", 00:10:39.688 "is_configured": true, 00:10:39.688 "data_offset": 0, 00:10:39.688 "data_size": 65536 00:10:39.688 }, 00:10:39.688 { 00:10:39.688 "name": "BaseBdev4", 00:10:39.688 "uuid": "4eab7a24-e564-4d19-a85b-ddbc5c03ddad", 00:10:39.688 "is_configured": true, 00:10:39.688 "data_offset": 0, 00:10:39.688 "data_size": 65536 00:10:39.688 } 00:10:39.688 ] 00:10:39.688 }' 00:10:39.688 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.688 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.254 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:40.254 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:40.254 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:40.254 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.254 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.254 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.254 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.254 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:40.254 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.254 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.254 [2024-11-20 17:03:03.930542] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.254 17:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.254 
17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.254 "name": "Existed_Raid", 00:10:40.254 "aliases": [ 00:10:40.254 "54dcf6f5-48df-4fa0-8cc6-195bc691af86" 00:10:40.254 ], 00:10:40.254 "product_name": "Raid Volume", 00:10:40.254 "block_size": 512, 00:10:40.254 "num_blocks": 262144, 00:10:40.254 "uuid": "54dcf6f5-48df-4fa0-8cc6-195bc691af86", 00:10:40.254 "assigned_rate_limits": { 00:10:40.254 "rw_ios_per_sec": 0, 00:10:40.254 "rw_mbytes_per_sec": 0, 00:10:40.254 "r_mbytes_per_sec": 0, 00:10:40.254 "w_mbytes_per_sec": 0 00:10:40.254 }, 00:10:40.254 "claimed": false, 00:10:40.254 "zoned": false, 00:10:40.254 "supported_io_types": { 00:10:40.254 "read": true, 00:10:40.254 "write": true, 00:10:40.254 "unmap": true, 00:10:40.254 "flush": true, 00:10:40.254 "reset": true, 00:10:40.254 "nvme_admin": false, 00:10:40.254 "nvme_io": false, 00:10:40.254 "nvme_io_md": false, 00:10:40.254 "write_zeroes": true, 00:10:40.254 "zcopy": false, 00:10:40.254 "get_zone_info": false, 00:10:40.254 "zone_management": false, 00:10:40.254 "zone_append": false, 00:10:40.254 "compare": false, 00:10:40.255 "compare_and_write": false, 00:10:40.255 "abort": false, 00:10:40.255 "seek_hole": false, 00:10:40.255 "seek_data": false, 00:10:40.255 "copy": false, 00:10:40.255 "nvme_iov_md": false 00:10:40.255 }, 00:10:40.255 "memory_domains": [ 00:10:40.255 { 00:10:40.255 "dma_device_id": "system", 00:10:40.255 "dma_device_type": 1 00:10:40.255 }, 00:10:40.255 { 00:10:40.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.255 "dma_device_type": 2 00:10:40.255 }, 00:10:40.255 { 00:10:40.255 "dma_device_id": "system", 00:10:40.255 "dma_device_type": 1 00:10:40.255 }, 00:10:40.255 { 00:10:40.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.255 "dma_device_type": 2 00:10:40.255 }, 00:10:40.255 { 00:10:40.255 "dma_device_id": "system", 00:10:40.255 "dma_device_type": 1 00:10:40.255 }, 00:10:40.255 { 00:10:40.255 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:40.255 "dma_device_type": 2 00:10:40.255 }, 00:10:40.255 { 00:10:40.255 "dma_device_id": "system", 00:10:40.255 "dma_device_type": 1 00:10:40.255 }, 00:10:40.255 { 00:10:40.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.255 "dma_device_type": 2 00:10:40.255 } 00:10:40.255 ], 00:10:40.255 "driver_specific": { 00:10:40.255 "raid": { 00:10:40.255 "uuid": "54dcf6f5-48df-4fa0-8cc6-195bc691af86", 00:10:40.255 "strip_size_kb": 64, 00:10:40.255 "state": "online", 00:10:40.255 "raid_level": "concat", 00:10:40.255 "superblock": false, 00:10:40.255 "num_base_bdevs": 4, 00:10:40.255 "num_base_bdevs_discovered": 4, 00:10:40.255 "num_base_bdevs_operational": 4, 00:10:40.255 "base_bdevs_list": [ 00:10:40.255 { 00:10:40.255 "name": "BaseBdev1", 00:10:40.255 "uuid": "597d1b8a-21aa-451f-8130-6106757ed134", 00:10:40.255 "is_configured": true, 00:10:40.255 "data_offset": 0, 00:10:40.255 "data_size": 65536 00:10:40.255 }, 00:10:40.255 { 00:10:40.255 "name": "BaseBdev2", 00:10:40.255 "uuid": "194dcb51-b72c-4973-93f6-957e692feb83", 00:10:40.255 "is_configured": true, 00:10:40.255 "data_offset": 0, 00:10:40.255 "data_size": 65536 00:10:40.255 }, 00:10:40.255 { 00:10:40.255 "name": "BaseBdev3", 00:10:40.255 "uuid": "37f64ded-a5ea-44b2-bc15-e007e5059b4b", 00:10:40.255 "is_configured": true, 00:10:40.255 "data_offset": 0, 00:10:40.255 "data_size": 65536 00:10:40.255 }, 00:10:40.255 { 00:10:40.255 "name": "BaseBdev4", 00:10:40.255 "uuid": "4eab7a24-e564-4d19-a85b-ddbc5c03ddad", 00:10:40.255 "is_configured": true, 00:10:40.255 "data_offset": 0, 00:10:40.255 "data_size": 65536 00:10:40.255 } 00:10:40.255 ] 00:10:40.255 } 00:10:40.255 } 00:10:40.255 }' 00:10:40.255 17:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.255 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:40.255 BaseBdev2 
00:10:40.255 BaseBdev3 00:10:40.255 BaseBdev4' 00:10:40.255 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.255 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.255 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.255 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:40.255 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.255 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.255 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.255 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.514 17:03:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.514 17:03:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.514 [2024-11-20 17:03:04.298317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:40.514 [2024-11-20 17:03:04.298352] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.514 [2024-11-20 17:03:04.298426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:40.514 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.515 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:40.515 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.515 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:40.515 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.515 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.515 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.515 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.515 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.773 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.773 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.773 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.773 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.774 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.774 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.774 "name": "Existed_Raid", 00:10:40.774 "uuid": "54dcf6f5-48df-4fa0-8cc6-195bc691af86", 00:10:40.774 "strip_size_kb": 64, 00:10:40.774 "state": "offline", 00:10:40.774 "raid_level": "concat", 00:10:40.774 "superblock": false, 00:10:40.774 "num_base_bdevs": 4, 00:10:40.774 "num_base_bdevs_discovered": 3, 00:10:40.774 "num_base_bdevs_operational": 3, 00:10:40.774 "base_bdevs_list": [ 00:10:40.774 { 00:10:40.774 "name": null, 00:10:40.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.774 "is_configured": false, 00:10:40.774 "data_offset": 0, 00:10:40.774 "data_size": 65536 00:10:40.774 }, 00:10:40.774 { 00:10:40.774 "name": "BaseBdev2", 00:10:40.774 "uuid": "194dcb51-b72c-4973-93f6-957e692feb83", 00:10:40.774 "is_configured": 
true, 00:10:40.774 "data_offset": 0, 00:10:40.774 "data_size": 65536 00:10:40.774 }, 00:10:40.774 { 00:10:40.774 "name": "BaseBdev3", 00:10:40.774 "uuid": "37f64ded-a5ea-44b2-bc15-e007e5059b4b", 00:10:40.774 "is_configured": true, 00:10:40.774 "data_offset": 0, 00:10:40.774 "data_size": 65536 00:10:40.774 }, 00:10:40.774 { 00:10:40.774 "name": "BaseBdev4", 00:10:40.774 "uuid": "4eab7a24-e564-4d19-a85b-ddbc5c03ddad", 00:10:40.774 "is_configured": true, 00:10:40.774 "data_offset": 0, 00:10:40.774 "data_size": 65536 00:10:40.774 } 00:10:40.774 ] 00:10:40.774 }' 00:10:40.774 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.774 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.341 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:41.341 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.341 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.341 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.341 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.341 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.341 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.341 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.341 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.341 17:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:41.341 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:41.341 17:03:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.341 [2024-11-20 17:03:04.968288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.341 [2024-11-20 17:03:05.106535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.341 17:03:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.341 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.342 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.600 [2024-11-20 17:03:05.241839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:41.600 [2024-11-20 17:03:05.241898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:41.600 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.601 BaseBdev2 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.601 [ 00:10:41.601 { 00:10:41.601 "name": "BaseBdev2", 00:10:41.601 "aliases": [ 00:10:41.601 "eb1e6de7-6b3d-4b34-bab3-14f00857a52a" 00:10:41.601 ], 00:10:41.601 "product_name": "Malloc disk", 00:10:41.601 "block_size": 512, 00:10:41.601 "num_blocks": 65536, 00:10:41.601 "uuid": "eb1e6de7-6b3d-4b34-bab3-14f00857a52a", 00:10:41.601 "assigned_rate_limits": { 00:10:41.601 "rw_ios_per_sec": 0, 00:10:41.601 "rw_mbytes_per_sec": 0, 00:10:41.601 "r_mbytes_per_sec": 0, 00:10:41.601 "w_mbytes_per_sec": 0 00:10:41.601 }, 00:10:41.601 "claimed": false, 00:10:41.601 "zoned": false, 00:10:41.601 "supported_io_types": { 00:10:41.601 "read": true, 00:10:41.601 "write": true, 00:10:41.601 "unmap": true, 00:10:41.601 "flush": true, 00:10:41.601 "reset": true, 00:10:41.601 "nvme_admin": false, 00:10:41.601 "nvme_io": false, 00:10:41.601 "nvme_io_md": false, 00:10:41.601 "write_zeroes": true, 00:10:41.601 "zcopy": true, 00:10:41.601 "get_zone_info": false, 00:10:41.601 "zone_management": false, 00:10:41.601 "zone_append": false, 00:10:41.601 "compare": false, 00:10:41.601 "compare_and_write": false, 00:10:41.601 "abort": true, 00:10:41.601 "seek_hole": false, 00:10:41.601 
"seek_data": false, 00:10:41.601 "copy": true, 00:10:41.601 "nvme_iov_md": false 00:10:41.601 }, 00:10:41.601 "memory_domains": [ 00:10:41.601 { 00:10:41.601 "dma_device_id": "system", 00:10:41.601 "dma_device_type": 1 00:10:41.601 }, 00:10:41.601 { 00:10:41.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.601 "dma_device_type": 2 00:10:41.601 } 00:10:41.601 ], 00:10:41.601 "driver_specific": {} 00:10:41.601 } 00:10:41.601 ] 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.601 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.860 BaseBdev3 00:10:41.860 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.860 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:41.860 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:41.860 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.860 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.860 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.860 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:41.860 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.860 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.860 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.860 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.860 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:41.860 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.860 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.860 [ 00:10:41.860 { 00:10:41.860 "name": "BaseBdev3", 00:10:41.860 "aliases": [ 00:10:41.860 "819f8d44-418b-4ec1-b411-434a81ce98dc" 00:10:41.860 ], 00:10:41.860 "product_name": "Malloc disk", 00:10:41.860 "block_size": 512, 00:10:41.860 "num_blocks": 65536, 00:10:41.860 "uuid": "819f8d44-418b-4ec1-b411-434a81ce98dc", 00:10:41.860 "assigned_rate_limits": { 00:10:41.860 "rw_ios_per_sec": 0, 00:10:41.860 "rw_mbytes_per_sec": 0, 00:10:41.860 "r_mbytes_per_sec": 0, 00:10:41.860 "w_mbytes_per_sec": 0 00:10:41.860 }, 00:10:41.860 "claimed": false, 00:10:41.860 "zoned": false, 00:10:41.860 "supported_io_types": { 00:10:41.860 "read": true, 00:10:41.860 "write": true, 00:10:41.860 "unmap": true, 00:10:41.860 "flush": true, 00:10:41.860 "reset": true, 00:10:41.860 "nvme_admin": false, 00:10:41.860 "nvme_io": false, 00:10:41.860 "nvme_io_md": false, 00:10:41.860 "write_zeroes": true, 00:10:41.860 "zcopy": true, 00:10:41.860 "get_zone_info": false, 00:10:41.860 "zone_management": false, 00:10:41.860 "zone_append": false, 00:10:41.860 "compare": false, 00:10:41.860 "compare_and_write": false, 00:10:41.860 "abort": true, 00:10:41.860 "seek_hole": false, 00:10:41.860 "seek_data": false, 
00:10:41.860 "copy": true, 00:10:41.860 "nvme_iov_md": false 00:10:41.860 }, 00:10:41.860 "memory_domains": [ 00:10:41.860 { 00:10:41.861 "dma_device_id": "system", 00:10:41.861 "dma_device_type": 1 00:10:41.861 }, 00:10:41.861 { 00:10:41.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.861 "dma_device_type": 2 00:10:41.861 } 00:10:41.861 ], 00:10:41.861 "driver_specific": {} 00:10:41.861 } 00:10:41.861 ] 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.861 BaseBdev4 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.861 
17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.861 [ 00:10:41.861 { 00:10:41.861 "name": "BaseBdev4", 00:10:41.861 "aliases": [ 00:10:41.861 "a179950b-c3c2-44e8-8ea0-bcd62acaeb40" 00:10:41.861 ], 00:10:41.861 "product_name": "Malloc disk", 00:10:41.861 "block_size": 512, 00:10:41.861 "num_blocks": 65536, 00:10:41.861 "uuid": "a179950b-c3c2-44e8-8ea0-bcd62acaeb40", 00:10:41.861 "assigned_rate_limits": { 00:10:41.861 "rw_ios_per_sec": 0, 00:10:41.861 "rw_mbytes_per_sec": 0, 00:10:41.861 "r_mbytes_per_sec": 0, 00:10:41.861 "w_mbytes_per_sec": 0 00:10:41.861 }, 00:10:41.861 "claimed": false, 00:10:41.861 "zoned": false, 00:10:41.861 "supported_io_types": { 00:10:41.861 "read": true, 00:10:41.861 "write": true, 00:10:41.861 "unmap": true, 00:10:41.861 "flush": true, 00:10:41.861 "reset": true, 00:10:41.861 "nvme_admin": false, 00:10:41.861 "nvme_io": false, 00:10:41.861 "nvme_io_md": false, 00:10:41.861 "write_zeroes": true, 00:10:41.861 "zcopy": true, 00:10:41.861 "get_zone_info": false, 00:10:41.861 "zone_management": false, 00:10:41.861 "zone_append": false, 00:10:41.861 "compare": false, 00:10:41.861 "compare_and_write": false, 00:10:41.861 "abort": true, 00:10:41.861 "seek_hole": false, 00:10:41.861 "seek_data": false, 00:10:41.861 
"copy": true, 00:10:41.861 "nvme_iov_md": false 00:10:41.861 }, 00:10:41.861 "memory_domains": [ 00:10:41.861 { 00:10:41.861 "dma_device_id": "system", 00:10:41.861 "dma_device_type": 1 00:10:41.861 }, 00:10:41.861 { 00:10:41.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.861 "dma_device_type": 2 00:10:41.861 } 00:10:41.861 ], 00:10:41.861 "driver_specific": {} 00:10:41.861 } 00:10:41.861 ] 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.861 [2024-11-20 17:03:05.610046] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.861 [2024-11-20 17:03:05.610271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.861 [2024-11-20 17:03:05.610419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.861 [2024-11-20 17:03:05.613150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.861 [2024-11-20 17:03:05.613375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.861 17:03:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.861 "name": "Existed_Raid", 00:10:41.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.861 "strip_size_kb": 64, 00:10:41.861 "state": "configuring", 00:10:41.861 
"raid_level": "concat", 00:10:41.861 "superblock": false, 00:10:41.861 "num_base_bdevs": 4, 00:10:41.861 "num_base_bdevs_discovered": 3, 00:10:41.861 "num_base_bdevs_operational": 4, 00:10:41.861 "base_bdevs_list": [ 00:10:41.861 { 00:10:41.861 "name": "BaseBdev1", 00:10:41.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.861 "is_configured": false, 00:10:41.861 "data_offset": 0, 00:10:41.861 "data_size": 0 00:10:41.861 }, 00:10:41.861 { 00:10:41.861 "name": "BaseBdev2", 00:10:41.861 "uuid": "eb1e6de7-6b3d-4b34-bab3-14f00857a52a", 00:10:41.861 "is_configured": true, 00:10:41.861 "data_offset": 0, 00:10:41.861 "data_size": 65536 00:10:41.861 }, 00:10:41.861 { 00:10:41.861 "name": "BaseBdev3", 00:10:41.861 "uuid": "819f8d44-418b-4ec1-b411-434a81ce98dc", 00:10:41.861 "is_configured": true, 00:10:41.861 "data_offset": 0, 00:10:41.861 "data_size": 65536 00:10:41.861 }, 00:10:41.861 { 00:10:41.861 "name": "BaseBdev4", 00:10:41.861 "uuid": "a179950b-c3c2-44e8-8ea0-bcd62acaeb40", 00:10:41.861 "is_configured": true, 00:10:41.861 "data_offset": 0, 00:10:41.861 "data_size": 65536 00:10:41.861 } 00:10:41.861 ] 00:10:41.861 }' 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.861 17:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.429 [2024-11-20 17:03:06.146252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.429 "name": "Existed_Raid", 00:10:42.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.429 "strip_size_kb": 64, 00:10:42.429 "state": "configuring", 00:10:42.429 "raid_level": "concat", 00:10:42.429 "superblock": false, 
00:10:42.429 "num_base_bdevs": 4, 00:10:42.429 "num_base_bdevs_discovered": 2, 00:10:42.429 "num_base_bdevs_operational": 4, 00:10:42.429 "base_bdevs_list": [ 00:10:42.429 { 00:10:42.429 "name": "BaseBdev1", 00:10:42.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.429 "is_configured": false, 00:10:42.429 "data_offset": 0, 00:10:42.429 "data_size": 0 00:10:42.429 }, 00:10:42.429 { 00:10:42.429 "name": null, 00:10:42.429 "uuid": "eb1e6de7-6b3d-4b34-bab3-14f00857a52a", 00:10:42.429 "is_configured": false, 00:10:42.429 "data_offset": 0, 00:10:42.429 "data_size": 65536 00:10:42.429 }, 00:10:42.429 { 00:10:42.429 "name": "BaseBdev3", 00:10:42.429 "uuid": "819f8d44-418b-4ec1-b411-434a81ce98dc", 00:10:42.429 "is_configured": true, 00:10:42.429 "data_offset": 0, 00:10:42.429 "data_size": 65536 00:10:42.429 }, 00:10:42.429 { 00:10:42.429 "name": "BaseBdev4", 00:10:42.429 "uuid": "a179950b-c3c2-44e8-8ea0-bcd62acaeb40", 00:10:42.429 "is_configured": true, 00:10:42.429 "data_offset": 0, 00:10:42.429 "data_size": 65536 00:10:42.429 } 00:10:42.429 ] 00:10:42.429 }' 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.429 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:42.996 17:03:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.996 [2024-11-20 17:03:06.742282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.996 BaseBdev1 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.996 [ 00:10:42.996 { 00:10:42.996 "name": "BaseBdev1", 00:10:42.996 "aliases": [ 00:10:42.996 "4beed444-c66a-4317-9114-ecce70f34696" 00:10:42.996 ], 00:10:42.996 "product_name": "Malloc disk", 00:10:42.996 "block_size": 512, 00:10:42.996 "num_blocks": 65536, 00:10:42.996 "uuid": "4beed444-c66a-4317-9114-ecce70f34696", 00:10:42.996 "assigned_rate_limits": { 00:10:42.996 "rw_ios_per_sec": 0, 00:10:42.996 "rw_mbytes_per_sec": 0, 00:10:42.996 "r_mbytes_per_sec": 0, 00:10:42.996 "w_mbytes_per_sec": 0 00:10:42.996 }, 00:10:42.996 "claimed": true, 00:10:42.996 "claim_type": "exclusive_write", 00:10:42.996 "zoned": false, 00:10:42.996 "supported_io_types": { 00:10:42.996 "read": true, 00:10:42.996 "write": true, 00:10:42.996 "unmap": true, 00:10:42.996 "flush": true, 00:10:42.996 "reset": true, 00:10:42.996 "nvme_admin": false, 00:10:42.996 "nvme_io": false, 00:10:42.996 "nvme_io_md": false, 00:10:42.996 "write_zeroes": true, 00:10:42.996 "zcopy": true, 00:10:42.996 "get_zone_info": false, 00:10:42.996 "zone_management": false, 00:10:42.996 "zone_append": false, 00:10:42.996 "compare": false, 00:10:42.996 "compare_and_write": false, 00:10:42.996 "abort": true, 00:10:42.996 "seek_hole": false, 00:10:42.996 "seek_data": false, 00:10:42.996 "copy": true, 00:10:42.996 "nvme_iov_md": false 00:10:42.996 }, 00:10:42.996 "memory_domains": [ 00:10:42.996 { 00:10:42.996 "dma_device_id": "system", 00:10:42.996 "dma_device_type": 1 00:10:42.996 }, 00:10:42.996 { 00:10:42.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.996 "dma_device_type": 2 00:10:42.996 } 00:10:42.996 ], 00:10:42.996 "driver_specific": {} 00:10:42.996 } 00:10:42.996 ] 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.996 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.997 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.997 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.997 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.997 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.997 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.997 "name": "Existed_Raid", 00:10:42.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.997 "strip_size_kb": 64, 00:10:42.997 "state": "configuring", 00:10:42.997 "raid_level": "concat", 00:10:42.997 "superblock": false, 
00:10:42.997 "num_base_bdevs": 4, 00:10:42.997 "num_base_bdevs_discovered": 3, 00:10:42.997 "num_base_bdevs_operational": 4, 00:10:42.997 "base_bdevs_list": [ 00:10:42.997 { 00:10:42.997 "name": "BaseBdev1", 00:10:42.997 "uuid": "4beed444-c66a-4317-9114-ecce70f34696", 00:10:42.997 "is_configured": true, 00:10:42.997 "data_offset": 0, 00:10:42.997 "data_size": 65536 00:10:42.997 }, 00:10:42.997 { 00:10:42.997 "name": null, 00:10:42.997 "uuid": "eb1e6de7-6b3d-4b34-bab3-14f00857a52a", 00:10:42.997 "is_configured": false, 00:10:42.997 "data_offset": 0, 00:10:42.997 "data_size": 65536 00:10:42.997 }, 00:10:42.997 { 00:10:42.997 "name": "BaseBdev3", 00:10:42.997 "uuid": "819f8d44-418b-4ec1-b411-434a81ce98dc", 00:10:42.997 "is_configured": true, 00:10:42.997 "data_offset": 0, 00:10:42.997 "data_size": 65536 00:10:42.997 }, 00:10:42.997 { 00:10:42.997 "name": "BaseBdev4", 00:10:42.997 "uuid": "a179950b-c3c2-44e8-8ea0-bcd62acaeb40", 00:10:42.997 "is_configured": true, 00:10:42.997 "data_offset": 0, 00:10:42.997 "data_size": 65536 00:10:42.997 } 00:10:42.997 ] 00:10:42.997 }' 00:10:42.997 17:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.997 17:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.563 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.563 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.563 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.563 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:43.564 17:03:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.564 [2024-11-20 17:03:07.366550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.564 17:03:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.564 "name": "Existed_Raid", 00:10:43.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.564 "strip_size_kb": 64, 00:10:43.564 "state": "configuring", 00:10:43.564 "raid_level": "concat", 00:10:43.564 "superblock": false, 00:10:43.564 "num_base_bdevs": 4, 00:10:43.564 "num_base_bdevs_discovered": 2, 00:10:43.564 "num_base_bdevs_operational": 4, 00:10:43.564 "base_bdevs_list": [ 00:10:43.564 { 00:10:43.564 "name": "BaseBdev1", 00:10:43.564 "uuid": "4beed444-c66a-4317-9114-ecce70f34696", 00:10:43.564 "is_configured": true, 00:10:43.564 "data_offset": 0, 00:10:43.564 "data_size": 65536 00:10:43.564 }, 00:10:43.564 { 00:10:43.564 "name": null, 00:10:43.564 "uuid": "eb1e6de7-6b3d-4b34-bab3-14f00857a52a", 00:10:43.564 "is_configured": false, 00:10:43.564 "data_offset": 0, 00:10:43.564 "data_size": 65536 00:10:43.564 }, 00:10:43.564 { 00:10:43.564 "name": null, 00:10:43.564 "uuid": "819f8d44-418b-4ec1-b411-434a81ce98dc", 00:10:43.564 "is_configured": false, 00:10:43.564 "data_offset": 0, 00:10:43.564 "data_size": 65536 00:10:43.564 }, 00:10:43.564 { 00:10:43.564 "name": "BaseBdev4", 00:10:43.564 "uuid": "a179950b-c3c2-44e8-8ea0-bcd62acaeb40", 00:10:43.564 "is_configured": true, 00:10:43.564 "data_offset": 0, 00:10:43.564 "data_size": 65536 00:10:43.564 } 00:10:43.564 ] 00:10:43.564 }' 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.564 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.131 [2024-11-20 17:03:07.934661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.131 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.389 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.389 "name": "Existed_Raid", 00:10:44.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.389 "strip_size_kb": 64, 00:10:44.389 "state": "configuring", 00:10:44.389 "raid_level": "concat", 00:10:44.389 "superblock": false, 00:10:44.389 "num_base_bdevs": 4, 00:10:44.389 "num_base_bdevs_discovered": 3, 00:10:44.389 "num_base_bdevs_operational": 4, 00:10:44.389 "base_bdevs_list": [ 00:10:44.389 { 00:10:44.389 "name": "BaseBdev1", 00:10:44.389 "uuid": "4beed444-c66a-4317-9114-ecce70f34696", 00:10:44.389 "is_configured": true, 00:10:44.389 "data_offset": 0, 00:10:44.389 "data_size": 65536 00:10:44.389 }, 00:10:44.389 { 00:10:44.389 "name": null, 00:10:44.389 "uuid": "eb1e6de7-6b3d-4b34-bab3-14f00857a52a", 00:10:44.389 "is_configured": false, 00:10:44.389 "data_offset": 0, 00:10:44.389 "data_size": 65536 00:10:44.389 }, 00:10:44.389 { 00:10:44.389 "name": "BaseBdev3", 00:10:44.389 "uuid": 
"819f8d44-418b-4ec1-b411-434a81ce98dc", 00:10:44.389 "is_configured": true, 00:10:44.389 "data_offset": 0, 00:10:44.389 "data_size": 65536 00:10:44.389 }, 00:10:44.389 { 00:10:44.389 "name": "BaseBdev4", 00:10:44.389 "uuid": "a179950b-c3c2-44e8-8ea0-bcd62acaeb40", 00:10:44.389 "is_configured": true, 00:10:44.389 "data_offset": 0, 00:10:44.389 "data_size": 65536 00:10:44.389 } 00:10:44.389 ] 00:10:44.389 }' 00:10:44.389 17:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.390 17:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.648 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.648 17:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.648 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.648 17:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.648 17:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.648 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:44.648 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:44.648 17:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.648 17:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.648 [2024-11-20 17:03:08.494862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.907 "name": "Existed_Raid", 00:10:44.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.907 "strip_size_kb": 64, 00:10:44.907 "state": "configuring", 00:10:44.907 "raid_level": "concat", 00:10:44.907 "superblock": false, 00:10:44.907 "num_base_bdevs": 4, 00:10:44.907 
"num_base_bdevs_discovered": 2, 00:10:44.907 "num_base_bdevs_operational": 4, 00:10:44.907 "base_bdevs_list": [ 00:10:44.907 { 00:10:44.907 "name": null, 00:10:44.907 "uuid": "4beed444-c66a-4317-9114-ecce70f34696", 00:10:44.907 "is_configured": false, 00:10:44.907 "data_offset": 0, 00:10:44.907 "data_size": 65536 00:10:44.907 }, 00:10:44.907 { 00:10:44.907 "name": null, 00:10:44.907 "uuid": "eb1e6de7-6b3d-4b34-bab3-14f00857a52a", 00:10:44.907 "is_configured": false, 00:10:44.907 "data_offset": 0, 00:10:44.907 "data_size": 65536 00:10:44.907 }, 00:10:44.907 { 00:10:44.907 "name": "BaseBdev3", 00:10:44.907 "uuid": "819f8d44-418b-4ec1-b411-434a81ce98dc", 00:10:44.907 "is_configured": true, 00:10:44.907 "data_offset": 0, 00:10:44.907 "data_size": 65536 00:10:44.907 }, 00:10:44.907 { 00:10:44.907 "name": "BaseBdev4", 00:10:44.907 "uuid": "a179950b-c3c2-44e8-8ea0-bcd62acaeb40", 00:10:44.907 "is_configured": true, 00:10:44.907 "data_offset": 0, 00:10:44.907 "data_size": 65536 00:10:44.907 } 00:10:44.907 ] 00:10:44.907 }' 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.907 17:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.480 [2024-11-20 17:03:09.159537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.480 "name": "Existed_Raid", 00:10:45.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.480 "strip_size_kb": 64, 00:10:45.480 "state": "configuring", 00:10:45.480 "raid_level": "concat", 00:10:45.480 "superblock": false, 00:10:45.480 "num_base_bdevs": 4, 00:10:45.480 "num_base_bdevs_discovered": 3, 00:10:45.480 "num_base_bdevs_operational": 4, 00:10:45.480 "base_bdevs_list": [ 00:10:45.480 { 00:10:45.480 "name": null, 00:10:45.480 "uuid": "4beed444-c66a-4317-9114-ecce70f34696", 00:10:45.480 "is_configured": false, 00:10:45.480 "data_offset": 0, 00:10:45.480 "data_size": 65536 00:10:45.480 }, 00:10:45.480 { 00:10:45.480 "name": "BaseBdev2", 00:10:45.480 "uuid": "eb1e6de7-6b3d-4b34-bab3-14f00857a52a", 00:10:45.480 "is_configured": true, 00:10:45.480 "data_offset": 0, 00:10:45.480 "data_size": 65536 00:10:45.480 }, 00:10:45.480 { 00:10:45.480 "name": "BaseBdev3", 00:10:45.480 "uuid": "819f8d44-418b-4ec1-b411-434a81ce98dc", 00:10:45.480 "is_configured": true, 00:10:45.480 "data_offset": 0, 00:10:45.480 "data_size": 65536 00:10:45.480 }, 00:10:45.480 { 00:10:45.480 "name": "BaseBdev4", 00:10:45.480 "uuid": "a179950b-c3c2-44e8-8ea0-bcd62acaeb40", 00:10:45.480 "is_configured": true, 00:10:45.480 "data_offset": 0, 00:10:45.480 "data_size": 65536 00:10:45.480 } 00:10:45.480 ] 00:10:45.480 }' 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.480 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4beed444-c66a-4317-9114-ecce70f34696 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.047 [2024-11-20 17:03:09.817865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:46.047 [2024-11-20 17:03:09.817922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:46.047 [2024-11-20 17:03:09.817935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:46.047 [2024-11-20 17:03:09.818265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:46.047 [2024-11-20 17:03:09.818437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:46.047 [2024-11-20 17:03:09.818456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:46.047 [2024-11-20 17:03:09.818745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.047 NewBaseBdev 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.047 17:03:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:46.047 [ 00:10:46.047 { 00:10:46.047 "name": "NewBaseBdev", 00:10:46.047 "aliases": [ 00:10:46.048 "4beed444-c66a-4317-9114-ecce70f34696" 00:10:46.048 ], 00:10:46.048 "product_name": "Malloc disk", 00:10:46.048 "block_size": 512, 00:10:46.048 "num_blocks": 65536, 00:10:46.048 "uuid": "4beed444-c66a-4317-9114-ecce70f34696", 00:10:46.048 "assigned_rate_limits": { 00:10:46.048 "rw_ios_per_sec": 0, 00:10:46.048 "rw_mbytes_per_sec": 0, 00:10:46.048 "r_mbytes_per_sec": 0, 00:10:46.048 "w_mbytes_per_sec": 0 00:10:46.048 }, 00:10:46.048 "claimed": true, 00:10:46.048 "claim_type": "exclusive_write", 00:10:46.048 "zoned": false, 00:10:46.048 "supported_io_types": { 00:10:46.048 "read": true, 00:10:46.048 "write": true, 00:10:46.048 "unmap": true, 00:10:46.048 "flush": true, 00:10:46.048 "reset": true, 00:10:46.048 "nvme_admin": false, 00:10:46.048 "nvme_io": false, 00:10:46.048 "nvme_io_md": false, 00:10:46.048 "write_zeroes": true, 00:10:46.048 "zcopy": true, 00:10:46.048 "get_zone_info": false, 00:10:46.048 "zone_management": false, 00:10:46.048 "zone_append": false, 00:10:46.048 "compare": false, 00:10:46.048 "compare_and_write": false, 00:10:46.048 "abort": true, 00:10:46.048 "seek_hole": false, 00:10:46.048 "seek_data": false, 00:10:46.048 "copy": true, 00:10:46.048 "nvme_iov_md": false 00:10:46.048 }, 00:10:46.048 "memory_domains": [ 00:10:46.048 { 00:10:46.048 "dma_device_id": "system", 00:10:46.048 "dma_device_type": 1 00:10:46.048 }, 00:10:46.048 { 00:10:46.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.048 "dma_device_type": 2 00:10:46.048 } 00:10:46.048 ], 00:10:46.048 "driver_specific": {} 00:10:46.048 } 00:10:46.048 ] 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.048 "name": "Existed_Raid", 00:10:46.048 "uuid": "b0b8cb45-4d64-4e51-a516-40efe7ae9098", 00:10:46.048 "strip_size_kb": 64, 00:10:46.048 "state": "online", 00:10:46.048 "raid_level": "concat", 00:10:46.048 "superblock": false, 00:10:46.048 
"num_base_bdevs": 4, 00:10:46.048 "num_base_bdevs_discovered": 4, 00:10:46.048 "num_base_bdevs_operational": 4, 00:10:46.048 "base_bdevs_list": [ 00:10:46.048 { 00:10:46.048 "name": "NewBaseBdev", 00:10:46.048 "uuid": "4beed444-c66a-4317-9114-ecce70f34696", 00:10:46.048 "is_configured": true, 00:10:46.048 "data_offset": 0, 00:10:46.048 "data_size": 65536 00:10:46.048 }, 00:10:46.048 { 00:10:46.048 "name": "BaseBdev2", 00:10:46.048 "uuid": "eb1e6de7-6b3d-4b34-bab3-14f00857a52a", 00:10:46.048 "is_configured": true, 00:10:46.048 "data_offset": 0, 00:10:46.048 "data_size": 65536 00:10:46.048 }, 00:10:46.048 { 00:10:46.048 "name": "BaseBdev3", 00:10:46.048 "uuid": "819f8d44-418b-4ec1-b411-434a81ce98dc", 00:10:46.048 "is_configured": true, 00:10:46.048 "data_offset": 0, 00:10:46.048 "data_size": 65536 00:10:46.048 }, 00:10:46.048 { 00:10:46.048 "name": "BaseBdev4", 00:10:46.048 "uuid": "a179950b-c3c2-44e8-8ea0-bcd62acaeb40", 00:10:46.048 "is_configured": true, 00:10:46.048 "data_offset": 0, 00:10:46.048 "data_size": 65536 00:10:46.048 } 00:10:46.048 ] 00:10:46.048 }' 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.048 17:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.615 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:46.615 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:46.615 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.615 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.615 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.615 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.615 17:03:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:46.615 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.615 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.615 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.615 [2024-11-20 17:03:10.398548] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.615 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.615 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.615 "name": "Existed_Raid", 00:10:46.615 "aliases": [ 00:10:46.615 "b0b8cb45-4d64-4e51-a516-40efe7ae9098" 00:10:46.615 ], 00:10:46.615 "product_name": "Raid Volume", 00:10:46.615 "block_size": 512, 00:10:46.615 "num_blocks": 262144, 00:10:46.615 "uuid": "b0b8cb45-4d64-4e51-a516-40efe7ae9098", 00:10:46.615 "assigned_rate_limits": { 00:10:46.615 "rw_ios_per_sec": 0, 00:10:46.615 "rw_mbytes_per_sec": 0, 00:10:46.615 "r_mbytes_per_sec": 0, 00:10:46.615 "w_mbytes_per_sec": 0 00:10:46.615 }, 00:10:46.615 "claimed": false, 00:10:46.615 "zoned": false, 00:10:46.615 "supported_io_types": { 00:10:46.615 "read": true, 00:10:46.615 "write": true, 00:10:46.615 "unmap": true, 00:10:46.615 "flush": true, 00:10:46.615 "reset": true, 00:10:46.615 "nvme_admin": false, 00:10:46.615 "nvme_io": false, 00:10:46.615 "nvme_io_md": false, 00:10:46.615 "write_zeroes": true, 00:10:46.615 "zcopy": false, 00:10:46.615 "get_zone_info": false, 00:10:46.615 "zone_management": false, 00:10:46.615 "zone_append": false, 00:10:46.615 "compare": false, 00:10:46.615 "compare_and_write": false, 00:10:46.615 "abort": false, 00:10:46.615 "seek_hole": false, 00:10:46.615 "seek_data": false, 00:10:46.615 "copy": false, 00:10:46.615 "nvme_iov_md": false 00:10:46.615 }, 
00:10:46.615 "memory_domains": [ 00:10:46.615 { 00:10:46.615 "dma_device_id": "system", 00:10:46.615 "dma_device_type": 1 00:10:46.615 }, 00:10:46.615 { 00:10:46.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.615 "dma_device_type": 2 00:10:46.615 }, 00:10:46.615 { 00:10:46.615 "dma_device_id": "system", 00:10:46.615 "dma_device_type": 1 00:10:46.615 }, 00:10:46.615 { 00:10:46.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.615 "dma_device_type": 2 00:10:46.615 }, 00:10:46.615 { 00:10:46.615 "dma_device_id": "system", 00:10:46.615 "dma_device_type": 1 00:10:46.615 }, 00:10:46.615 { 00:10:46.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.615 "dma_device_type": 2 00:10:46.615 }, 00:10:46.615 { 00:10:46.615 "dma_device_id": "system", 00:10:46.616 "dma_device_type": 1 00:10:46.616 }, 00:10:46.616 { 00:10:46.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.616 "dma_device_type": 2 00:10:46.616 } 00:10:46.616 ], 00:10:46.616 "driver_specific": { 00:10:46.616 "raid": { 00:10:46.616 "uuid": "b0b8cb45-4d64-4e51-a516-40efe7ae9098", 00:10:46.616 "strip_size_kb": 64, 00:10:46.616 "state": "online", 00:10:46.616 "raid_level": "concat", 00:10:46.616 "superblock": false, 00:10:46.616 "num_base_bdevs": 4, 00:10:46.616 "num_base_bdevs_discovered": 4, 00:10:46.616 "num_base_bdevs_operational": 4, 00:10:46.616 "base_bdevs_list": [ 00:10:46.616 { 00:10:46.616 "name": "NewBaseBdev", 00:10:46.616 "uuid": "4beed444-c66a-4317-9114-ecce70f34696", 00:10:46.616 "is_configured": true, 00:10:46.616 "data_offset": 0, 00:10:46.616 "data_size": 65536 00:10:46.616 }, 00:10:46.616 { 00:10:46.616 "name": "BaseBdev2", 00:10:46.616 "uuid": "eb1e6de7-6b3d-4b34-bab3-14f00857a52a", 00:10:46.616 "is_configured": true, 00:10:46.616 "data_offset": 0, 00:10:46.616 "data_size": 65536 00:10:46.616 }, 00:10:46.616 { 00:10:46.616 "name": "BaseBdev3", 00:10:46.616 "uuid": "819f8d44-418b-4ec1-b411-434a81ce98dc", 00:10:46.616 "is_configured": true, 00:10:46.616 "data_offset": 0, 
00:10:46.616 "data_size": 65536 00:10:46.616 }, 00:10:46.616 { 00:10:46.616 "name": "BaseBdev4", 00:10:46.616 "uuid": "a179950b-c3c2-44e8-8ea0-bcd62acaeb40", 00:10:46.616 "is_configured": true, 00:10:46.616 "data_offset": 0, 00:10:46.616 "data_size": 65536 00:10:46.616 } 00:10:46.616 ] 00:10:46.616 } 00:10:46.616 } 00:10:46.616 }' 00:10:46.616 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:46.875 BaseBdev2 00:10:46.875 BaseBdev3 00:10:46.875 BaseBdev4' 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.875 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.134 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.134 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.134 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:47.134 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.134 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.134 [2024-11-20 17:03:10.754158] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:47.134 [2024-11-20 17:03:10.754224] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.134 [2024-11-20 17:03:10.754339] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.134 [2024-11-20 17:03:10.754438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.134 [2024-11-20 17:03:10.754454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:47.134 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.134 17:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71211 00:10:47.134 17:03:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71211 ']' 00:10:47.134 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71211 00:10:47.134 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:47.134 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.134 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71211 00:10:47.134 killing process with pid 71211 00:10:47.134 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.134 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.134 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71211' 00:10:47.134 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71211 00:10:47.134 [2024-11-20 17:03:10.796182] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:47.134 17:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71211 00:10:47.393 [2024-11-20 17:03:11.120165] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:48.329 17:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:48.329 00:10:48.329 real 0m12.686s 00:10:48.329 user 0m21.180s 00:10:48.329 sys 0m1.700s 00:10:48.329 ************************************ 00:10:48.329 END TEST raid_state_function_test 00:10:48.329 ************************************ 00:10:48.329 17:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.329 17:03:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.329 17:03:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:48.329 17:03:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:48.329 17:03:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.329 17:03:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:48.329 ************************************ 00:10:48.329 START TEST raid_state_function_test_sb 00:10:48.329 ************************************ 00:10:48.329 17:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:48.329 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:48.329 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:48.329 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:48.329 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:48.329 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:48.329 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.329 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:48.329 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.329 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.329 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71897 00:10:48.330 17:03:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:48.330 Process raid pid: 71897 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71897' 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71897 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71897 ']' 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.330 17:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.588 [2024-11-20 17:03:12.264062] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:10:48.588 [2024-11-20 17:03:12.264474] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.588 [2024-11-20 17:03:12.449591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.848 [2024-11-20 17:03:12.568421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.107 [2024-11-20 17:03:12.762700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.108 [2024-11-20 17:03:12.763036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.676 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.676 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:49.676 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.676 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.676 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.676 [2024-11-20 17:03:13.248788] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.676 [2024-11-20 17:03:13.248874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.676 [2024-11-20 17:03:13.248893] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.676 [2024-11-20 17:03:13.248910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.676 [2024-11-20 17:03:13.248920] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:49.676 [2024-11-20 17:03:13.248935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:49.676 [2024-11-20 17:03:13.248945] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:49.676 [2024-11-20 17:03:13.248959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:49.676 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.676 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:49.676 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.676 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.676 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.676 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.676 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.676 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.676 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.676 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.677 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.677 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.677 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.677 
17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.677 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.677 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.677 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.677 "name": "Existed_Raid", 00:10:49.677 "uuid": "33dd75c7-a58a-40e0-a3f0-6833ea9b105f", 00:10:49.677 "strip_size_kb": 64, 00:10:49.677 "state": "configuring", 00:10:49.677 "raid_level": "concat", 00:10:49.677 "superblock": true, 00:10:49.677 "num_base_bdevs": 4, 00:10:49.677 "num_base_bdevs_discovered": 0, 00:10:49.677 "num_base_bdevs_operational": 4, 00:10:49.677 "base_bdevs_list": [ 00:10:49.677 { 00:10:49.677 "name": "BaseBdev1", 00:10:49.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.677 "is_configured": false, 00:10:49.677 "data_offset": 0, 00:10:49.677 "data_size": 0 00:10:49.677 }, 00:10:49.677 { 00:10:49.677 "name": "BaseBdev2", 00:10:49.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.677 "is_configured": false, 00:10:49.677 "data_offset": 0, 00:10:49.677 "data_size": 0 00:10:49.677 }, 00:10:49.677 { 00:10:49.677 "name": "BaseBdev3", 00:10:49.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.677 "is_configured": false, 00:10:49.677 "data_offset": 0, 00:10:49.677 "data_size": 0 00:10:49.677 }, 00:10:49.677 { 00:10:49.677 "name": "BaseBdev4", 00:10:49.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.677 "is_configured": false, 00:10:49.677 "data_offset": 0, 00:10:49.677 "data_size": 0 00:10:49.677 } 00:10:49.677 ] 00:10:49.677 }' 00:10:49.677 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.677 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.936 17:03:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.936 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.936 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.936 [2024-11-20 17:03:13.772895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.936 [2024-11-20 17:03:13.772940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:49.936 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.936 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.936 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.936 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.936 [2024-11-20 17:03:13.780881] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.936 [2024-11-20 17:03:13.780946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.936 [2024-11-20 17:03:13.780961] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.936 [2024-11-20 17:03:13.780977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.936 [2024-11-20 17:03:13.780987] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:49.936 [2024-11-20 17:03:13.781001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:49.936 [2024-11-20 17:03:13.781011] bdev.c:8626:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:49.936 [2024-11-20 17:03:13.781025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:49.936 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.936 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:49.936 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.936 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.196 [2024-11-20 17:03:13.826009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.196 BaseBdev1 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.196 [ 00:10:50.196 { 00:10:50.196 "name": "BaseBdev1", 00:10:50.196 "aliases": [ 00:10:50.196 "168aa6d4-ba0a-4bce-acb2-3d7a62377489" 00:10:50.196 ], 00:10:50.196 "product_name": "Malloc disk", 00:10:50.196 "block_size": 512, 00:10:50.196 "num_blocks": 65536, 00:10:50.196 "uuid": "168aa6d4-ba0a-4bce-acb2-3d7a62377489", 00:10:50.196 "assigned_rate_limits": { 00:10:50.196 "rw_ios_per_sec": 0, 00:10:50.196 "rw_mbytes_per_sec": 0, 00:10:50.196 "r_mbytes_per_sec": 0, 00:10:50.196 "w_mbytes_per_sec": 0 00:10:50.196 }, 00:10:50.196 "claimed": true, 00:10:50.196 "claim_type": "exclusive_write", 00:10:50.196 "zoned": false, 00:10:50.196 "supported_io_types": { 00:10:50.196 "read": true, 00:10:50.196 "write": true, 00:10:50.196 "unmap": true, 00:10:50.196 "flush": true, 00:10:50.196 "reset": true, 00:10:50.196 "nvme_admin": false, 00:10:50.196 "nvme_io": false, 00:10:50.196 "nvme_io_md": false, 00:10:50.196 "write_zeroes": true, 00:10:50.196 "zcopy": true, 00:10:50.196 "get_zone_info": false, 00:10:50.196 "zone_management": false, 00:10:50.196 "zone_append": false, 00:10:50.196 "compare": false, 00:10:50.196 "compare_and_write": false, 00:10:50.196 "abort": true, 00:10:50.196 "seek_hole": false, 00:10:50.196 "seek_data": false, 00:10:50.196 "copy": true, 00:10:50.196 "nvme_iov_md": false 00:10:50.196 }, 00:10:50.196 "memory_domains": [ 00:10:50.196 { 00:10:50.196 "dma_device_id": "system", 00:10:50.196 "dma_device_type": 1 00:10:50.196 }, 00:10:50.196 { 00:10:50.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.196 "dma_device_type": 2 00:10:50.196 } 
00:10:50.196 ], 00:10:50.196 "driver_specific": {} 00:10:50.196 } 00:10:50.196 ] 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.196 17:03:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.196 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.196 "name": "Existed_Raid", 00:10:50.196 "uuid": "0d906ffd-f697-45f7-bc95-8fad43429729", 00:10:50.196 "strip_size_kb": 64, 00:10:50.196 "state": "configuring", 00:10:50.196 "raid_level": "concat", 00:10:50.196 "superblock": true, 00:10:50.196 "num_base_bdevs": 4, 00:10:50.196 "num_base_bdevs_discovered": 1, 00:10:50.196 "num_base_bdevs_operational": 4, 00:10:50.196 "base_bdevs_list": [ 00:10:50.196 { 00:10:50.196 "name": "BaseBdev1", 00:10:50.196 "uuid": "168aa6d4-ba0a-4bce-acb2-3d7a62377489", 00:10:50.196 "is_configured": true, 00:10:50.196 "data_offset": 2048, 00:10:50.196 "data_size": 63488 00:10:50.196 }, 00:10:50.196 { 00:10:50.196 "name": "BaseBdev2", 00:10:50.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.196 "is_configured": false, 00:10:50.196 "data_offset": 0, 00:10:50.196 "data_size": 0 00:10:50.196 }, 00:10:50.196 { 00:10:50.196 "name": "BaseBdev3", 00:10:50.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.196 "is_configured": false, 00:10:50.196 "data_offset": 0, 00:10:50.196 "data_size": 0 00:10:50.196 }, 00:10:50.196 { 00:10:50.196 "name": "BaseBdev4", 00:10:50.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.196 "is_configured": false, 00:10:50.196 "data_offset": 0, 00:10:50.196 "data_size": 0 00:10:50.196 } 00:10:50.197 ] 00:10:50.197 }' 00:10:50.197 17:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.197 17:03:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.765 17:03:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.765 [2024-11-20 17:03:14.370201] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:50.765 [2024-11-20 17:03:14.370411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.765 [2024-11-20 17:03:14.378344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.765 [2024-11-20 17:03:14.380876] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:50.765 [2024-11-20 17:03:14.380929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:50.765 [2024-11-20 17:03:14.380945] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:50.765 [2024-11-20 17:03:14.380964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:50.765 [2024-11-20 17:03:14.380975] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:50.765 [2024-11-20 17:03:14.380989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.765 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:50.765 "name": "Existed_Raid", 00:10:50.765 "uuid": "1730b8dd-39a4-44d4-839c-b1cf19ac5a66", 00:10:50.765 "strip_size_kb": 64, 00:10:50.765 "state": "configuring", 00:10:50.765 "raid_level": "concat", 00:10:50.765 "superblock": true, 00:10:50.765 "num_base_bdevs": 4, 00:10:50.765 "num_base_bdevs_discovered": 1, 00:10:50.765 "num_base_bdevs_operational": 4, 00:10:50.765 "base_bdevs_list": [ 00:10:50.765 { 00:10:50.765 "name": "BaseBdev1", 00:10:50.765 "uuid": "168aa6d4-ba0a-4bce-acb2-3d7a62377489", 00:10:50.765 "is_configured": true, 00:10:50.765 "data_offset": 2048, 00:10:50.765 "data_size": 63488 00:10:50.765 }, 00:10:50.765 { 00:10:50.765 "name": "BaseBdev2", 00:10:50.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.765 "is_configured": false, 00:10:50.765 "data_offset": 0, 00:10:50.765 "data_size": 0 00:10:50.765 }, 00:10:50.765 { 00:10:50.765 "name": "BaseBdev3", 00:10:50.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.765 "is_configured": false, 00:10:50.765 "data_offset": 0, 00:10:50.765 "data_size": 0 00:10:50.765 }, 00:10:50.765 { 00:10:50.766 "name": "BaseBdev4", 00:10:50.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.766 "is_configured": false, 00:10:50.766 "data_offset": 0, 00:10:50.766 "data_size": 0 00:10:50.766 } 00:10:50.766 ] 00:10:50.766 }' 00:10:50.766 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.766 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.335 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:51.335 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.335 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.335 [2024-11-20 17:03:14.947987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:51.335 BaseBdev2 00:10:51.335 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.335 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:51.335 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:51.335 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.335 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:51.335 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.335 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.335 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.336 [ 00:10:51.336 { 00:10:51.336 "name": "BaseBdev2", 00:10:51.336 "aliases": [ 00:10:51.336 "44d8cea7-9c8b-4428-a9da-a42322751844" 00:10:51.336 ], 00:10:51.336 "product_name": "Malloc disk", 00:10:51.336 "block_size": 512, 00:10:51.336 "num_blocks": 65536, 00:10:51.336 "uuid": "44d8cea7-9c8b-4428-a9da-a42322751844", 
00:10:51.336 "assigned_rate_limits": { 00:10:51.336 "rw_ios_per_sec": 0, 00:10:51.336 "rw_mbytes_per_sec": 0, 00:10:51.336 "r_mbytes_per_sec": 0, 00:10:51.336 "w_mbytes_per_sec": 0 00:10:51.336 }, 00:10:51.336 "claimed": true, 00:10:51.336 "claim_type": "exclusive_write", 00:10:51.336 "zoned": false, 00:10:51.336 "supported_io_types": { 00:10:51.336 "read": true, 00:10:51.336 "write": true, 00:10:51.336 "unmap": true, 00:10:51.336 "flush": true, 00:10:51.336 "reset": true, 00:10:51.336 "nvme_admin": false, 00:10:51.336 "nvme_io": false, 00:10:51.336 "nvme_io_md": false, 00:10:51.336 "write_zeroes": true, 00:10:51.336 "zcopy": true, 00:10:51.336 "get_zone_info": false, 00:10:51.336 "zone_management": false, 00:10:51.336 "zone_append": false, 00:10:51.336 "compare": false, 00:10:51.336 "compare_and_write": false, 00:10:51.336 "abort": true, 00:10:51.336 "seek_hole": false, 00:10:51.336 "seek_data": false, 00:10:51.336 "copy": true, 00:10:51.336 "nvme_iov_md": false 00:10:51.336 }, 00:10:51.336 "memory_domains": [ 00:10:51.336 { 00:10:51.336 "dma_device_id": "system", 00:10:51.336 "dma_device_type": 1 00:10:51.336 }, 00:10:51.336 { 00:10:51.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.336 "dma_device_type": 2 00:10:51.336 } 00:10:51.336 ], 00:10:51.336 "driver_specific": {} 00:10:51.336 } 00:10:51.336 ] 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.336 17:03:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.336 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.336 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.336 "name": "Existed_Raid", 00:10:51.336 "uuid": "1730b8dd-39a4-44d4-839c-b1cf19ac5a66", 00:10:51.336 "strip_size_kb": 64, 00:10:51.336 "state": "configuring", 00:10:51.336 "raid_level": "concat", 00:10:51.336 "superblock": true, 00:10:51.336 "num_base_bdevs": 4, 00:10:51.336 "num_base_bdevs_discovered": 2, 00:10:51.336 
"num_base_bdevs_operational": 4, 00:10:51.336 "base_bdevs_list": [ 00:10:51.336 { 00:10:51.336 "name": "BaseBdev1", 00:10:51.336 "uuid": "168aa6d4-ba0a-4bce-acb2-3d7a62377489", 00:10:51.336 "is_configured": true, 00:10:51.336 "data_offset": 2048, 00:10:51.336 "data_size": 63488 00:10:51.336 }, 00:10:51.336 { 00:10:51.336 "name": "BaseBdev2", 00:10:51.336 "uuid": "44d8cea7-9c8b-4428-a9da-a42322751844", 00:10:51.336 "is_configured": true, 00:10:51.336 "data_offset": 2048, 00:10:51.336 "data_size": 63488 00:10:51.336 }, 00:10:51.336 { 00:10:51.336 "name": "BaseBdev3", 00:10:51.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.336 "is_configured": false, 00:10:51.336 "data_offset": 0, 00:10:51.336 "data_size": 0 00:10:51.336 }, 00:10:51.336 { 00:10:51.336 "name": "BaseBdev4", 00:10:51.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.336 "is_configured": false, 00:10:51.336 "data_offset": 0, 00:10:51.336 "data_size": 0 00:10:51.336 } 00:10:51.336 ] 00:10:51.336 }' 00:10:51.336 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.336 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.903 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:51.903 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.903 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.903 [2024-11-20 17:03:15.573894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.903 BaseBdev3 00:10:51.903 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.903 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:51.903 17:03:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:51.903 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.903 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:51.903 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.903 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.903 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.903 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.903 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.903 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.903 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:51.903 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.903 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.903 [ 00:10:51.903 { 00:10:51.903 "name": "BaseBdev3", 00:10:51.903 "aliases": [ 00:10:51.903 "9012d52a-1108-4b93-beef-68bfb4e9d93a" 00:10:51.903 ], 00:10:51.903 "product_name": "Malloc disk", 00:10:51.903 "block_size": 512, 00:10:51.903 "num_blocks": 65536, 00:10:51.903 "uuid": "9012d52a-1108-4b93-beef-68bfb4e9d93a", 00:10:51.903 "assigned_rate_limits": { 00:10:51.903 "rw_ios_per_sec": 0, 00:10:51.903 "rw_mbytes_per_sec": 0, 00:10:51.903 "r_mbytes_per_sec": 0, 00:10:51.903 "w_mbytes_per_sec": 0 00:10:51.903 }, 00:10:51.903 "claimed": true, 00:10:51.903 "claim_type": "exclusive_write", 00:10:51.903 "zoned": false, 00:10:51.903 "supported_io_types": { 
00:10:51.903 "read": true, 00:10:51.903 "write": true, 00:10:51.903 "unmap": true, 00:10:51.903 "flush": true, 00:10:51.903 "reset": true, 00:10:51.903 "nvme_admin": false, 00:10:51.903 "nvme_io": false, 00:10:51.903 "nvme_io_md": false, 00:10:51.903 "write_zeroes": true, 00:10:51.903 "zcopy": true, 00:10:51.903 "get_zone_info": false, 00:10:51.903 "zone_management": false, 00:10:51.903 "zone_append": false, 00:10:51.903 "compare": false, 00:10:51.903 "compare_and_write": false, 00:10:51.903 "abort": true, 00:10:51.904 "seek_hole": false, 00:10:51.904 "seek_data": false, 00:10:51.904 "copy": true, 00:10:51.904 "nvme_iov_md": false 00:10:51.904 }, 00:10:51.904 "memory_domains": [ 00:10:51.904 { 00:10:51.904 "dma_device_id": "system", 00:10:51.904 "dma_device_type": 1 00:10:51.904 }, 00:10:51.904 { 00:10:51.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.904 "dma_device_type": 2 00:10:51.904 } 00:10:51.904 ], 00:10:51.904 "driver_specific": {} 00:10:51.904 } 00:10:51.904 ] 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.904 "name": "Existed_Raid", 00:10:51.904 "uuid": "1730b8dd-39a4-44d4-839c-b1cf19ac5a66", 00:10:51.904 "strip_size_kb": 64, 00:10:51.904 "state": "configuring", 00:10:51.904 "raid_level": "concat", 00:10:51.904 "superblock": true, 00:10:51.904 "num_base_bdevs": 4, 00:10:51.904 "num_base_bdevs_discovered": 3, 00:10:51.904 "num_base_bdevs_operational": 4, 00:10:51.904 "base_bdevs_list": [ 00:10:51.904 { 00:10:51.904 "name": "BaseBdev1", 00:10:51.904 "uuid": "168aa6d4-ba0a-4bce-acb2-3d7a62377489", 00:10:51.904 "is_configured": true, 00:10:51.904 "data_offset": 2048, 00:10:51.904 "data_size": 63488 00:10:51.904 }, 00:10:51.904 { 00:10:51.904 "name": "BaseBdev2", 00:10:51.904 
"uuid": "44d8cea7-9c8b-4428-a9da-a42322751844", 00:10:51.904 "is_configured": true, 00:10:51.904 "data_offset": 2048, 00:10:51.904 "data_size": 63488 00:10:51.904 }, 00:10:51.904 { 00:10:51.904 "name": "BaseBdev3", 00:10:51.904 "uuid": "9012d52a-1108-4b93-beef-68bfb4e9d93a", 00:10:51.904 "is_configured": true, 00:10:51.904 "data_offset": 2048, 00:10:51.904 "data_size": 63488 00:10:51.904 }, 00:10:51.904 { 00:10:51.904 "name": "BaseBdev4", 00:10:51.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.904 "is_configured": false, 00:10:51.904 "data_offset": 0, 00:10:51.904 "data_size": 0 00:10:51.904 } 00:10:51.904 ] 00:10:51.904 }' 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.904 17:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.472 [2024-11-20 17:03:16.162685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:52.472 [2024-11-20 17:03:16.163302] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:52.472 [2024-11-20 17:03:16.163330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:52.472 BaseBdev4 00:10:52.472 [2024-11-20 17:03:16.163701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:52.472 [2024-11-20 17:03:16.163930] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:52.472 [2024-11-20 17:03:16.163960] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:52.472 [2024-11-20 17:03:16.164137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.472 [ 00:10:52.472 { 00:10:52.472 "name": "BaseBdev4", 00:10:52.472 "aliases": [ 00:10:52.472 "c410b423-9a5a-4c1d-98bf-162ed78bfe3f" 00:10:52.472 ], 00:10:52.472 "product_name": "Malloc disk", 00:10:52.472 "block_size": 512, 00:10:52.472 
"num_blocks": 65536, 00:10:52.472 "uuid": "c410b423-9a5a-4c1d-98bf-162ed78bfe3f", 00:10:52.472 "assigned_rate_limits": { 00:10:52.472 "rw_ios_per_sec": 0, 00:10:52.472 "rw_mbytes_per_sec": 0, 00:10:52.472 "r_mbytes_per_sec": 0, 00:10:52.472 "w_mbytes_per_sec": 0 00:10:52.472 }, 00:10:52.472 "claimed": true, 00:10:52.472 "claim_type": "exclusive_write", 00:10:52.472 "zoned": false, 00:10:52.472 "supported_io_types": { 00:10:52.472 "read": true, 00:10:52.472 "write": true, 00:10:52.472 "unmap": true, 00:10:52.472 "flush": true, 00:10:52.472 "reset": true, 00:10:52.472 "nvme_admin": false, 00:10:52.472 "nvme_io": false, 00:10:52.472 "nvme_io_md": false, 00:10:52.472 "write_zeroes": true, 00:10:52.472 "zcopy": true, 00:10:52.472 "get_zone_info": false, 00:10:52.472 "zone_management": false, 00:10:52.472 "zone_append": false, 00:10:52.472 "compare": false, 00:10:52.472 "compare_and_write": false, 00:10:52.472 "abort": true, 00:10:52.472 "seek_hole": false, 00:10:52.472 "seek_data": false, 00:10:52.472 "copy": true, 00:10:52.472 "nvme_iov_md": false 00:10:52.472 }, 00:10:52.472 "memory_domains": [ 00:10:52.472 { 00:10:52.472 "dma_device_id": "system", 00:10:52.472 "dma_device_type": 1 00:10:52.472 }, 00:10:52.472 { 00:10:52.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.472 "dma_device_type": 2 00:10:52.472 } 00:10:52.472 ], 00:10:52.472 "driver_specific": {} 00:10:52.472 } 00:10:52.472 ] 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.472 "name": "Existed_Raid", 00:10:52.472 "uuid": "1730b8dd-39a4-44d4-839c-b1cf19ac5a66", 00:10:52.472 "strip_size_kb": 64, 00:10:52.472 "state": "online", 00:10:52.472 "raid_level": "concat", 00:10:52.472 "superblock": true, 00:10:52.472 "num_base_bdevs": 4, 
00:10:52.472 "num_base_bdevs_discovered": 4, 00:10:52.472 "num_base_bdevs_operational": 4, 00:10:52.472 "base_bdevs_list": [ 00:10:52.472 { 00:10:52.472 "name": "BaseBdev1", 00:10:52.472 "uuid": "168aa6d4-ba0a-4bce-acb2-3d7a62377489", 00:10:52.472 "is_configured": true, 00:10:52.472 "data_offset": 2048, 00:10:52.472 "data_size": 63488 00:10:52.472 }, 00:10:52.472 { 00:10:52.472 "name": "BaseBdev2", 00:10:52.472 "uuid": "44d8cea7-9c8b-4428-a9da-a42322751844", 00:10:52.472 "is_configured": true, 00:10:52.472 "data_offset": 2048, 00:10:52.472 "data_size": 63488 00:10:52.472 }, 00:10:52.472 { 00:10:52.472 "name": "BaseBdev3", 00:10:52.472 "uuid": "9012d52a-1108-4b93-beef-68bfb4e9d93a", 00:10:52.472 "is_configured": true, 00:10:52.472 "data_offset": 2048, 00:10:52.472 "data_size": 63488 00:10:52.472 }, 00:10:52.472 { 00:10:52.472 "name": "BaseBdev4", 00:10:52.472 "uuid": "c410b423-9a5a-4c1d-98bf-162ed78bfe3f", 00:10:52.472 "is_configured": true, 00:10:52.472 "data_offset": 2048, 00:10:52.472 "data_size": 63488 00:10:52.472 } 00:10:52.472 ] 00:10:52.472 }' 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.472 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.041 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:53.041 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:53.041 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:53.041 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:53.041 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:53.041 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:53.041 
17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:53.041 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:53.041 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.041 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.041 [2024-11-20 17:03:16.711331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.041 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.041 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:53.041 "name": "Existed_Raid", 00:10:53.041 "aliases": [ 00:10:53.041 "1730b8dd-39a4-44d4-839c-b1cf19ac5a66" 00:10:53.041 ], 00:10:53.041 "product_name": "Raid Volume", 00:10:53.041 "block_size": 512, 00:10:53.041 "num_blocks": 253952, 00:10:53.041 "uuid": "1730b8dd-39a4-44d4-839c-b1cf19ac5a66", 00:10:53.041 "assigned_rate_limits": { 00:10:53.041 "rw_ios_per_sec": 0, 00:10:53.041 "rw_mbytes_per_sec": 0, 00:10:53.041 "r_mbytes_per_sec": 0, 00:10:53.041 "w_mbytes_per_sec": 0 00:10:53.041 }, 00:10:53.041 "claimed": false, 00:10:53.041 "zoned": false, 00:10:53.041 "supported_io_types": { 00:10:53.041 "read": true, 00:10:53.041 "write": true, 00:10:53.041 "unmap": true, 00:10:53.041 "flush": true, 00:10:53.041 "reset": true, 00:10:53.041 "nvme_admin": false, 00:10:53.041 "nvme_io": false, 00:10:53.041 "nvme_io_md": false, 00:10:53.041 "write_zeroes": true, 00:10:53.041 "zcopy": false, 00:10:53.041 "get_zone_info": false, 00:10:53.041 "zone_management": false, 00:10:53.041 "zone_append": false, 00:10:53.041 "compare": false, 00:10:53.041 "compare_and_write": false, 00:10:53.041 "abort": false, 00:10:53.041 "seek_hole": false, 00:10:53.041 "seek_data": false, 00:10:53.041 "copy": false, 00:10:53.041 
"nvme_iov_md": false 00:10:53.041 }, 00:10:53.041 "memory_domains": [ 00:10:53.041 { 00:10:53.041 "dma_device_id": "system", 00:10:53.041 "dma_device_type": 1 00:10:53.041 }, 00:10:53.041 { 00:10:53.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.041 "dma_device_type": 2 00:10:53.041 }, 00:10:53.041 { 00:10:53.041 "dma_device_id": "system", 00:10:53.041 "dma_device_type": 1 00:10:53.041 }, 00:10:53.041 { 00:10:53.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.041 "dma_device_type": 2 00:10:53.041 }, 00:10:53.041 { 00:10:53.041 "dma_device_id": "system", 00:10:53.041 "dma_device_type": 1 00:10:53.041 }, 00:10:53.041 { 00:10:53.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.041 "dma_device_type": 2 00:10:53.041 }, 00:10:53.041 { 00:10:53.041 "dma_device_id": "system", 00:10:53.041 "dma_device_type": 1 00:10:53.041 }, 00:10:53.041 { 00:10:53.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.041 "dma_device_type": 2 00:10:53.041 } 00:10:53.041 ], 00:10:53.041 "driver_specific": { 00:10:53.041 "raid": { 00:10:53.041 "uuid": "1730b8dd-39a4-44d4-839c-b1cf19ac5a66", 00:10:53.041 "strip_size_kb": 64, 00:10:53.041 "state": "online", 00:10:53.041 "raid_level": "concat", 00:10:53.041 "superblock": true, 00:10:53.041 "num_base_bdevs": 4, 00:10:53.041 "num_base_bdevs_discovered": 4, 00:10:53.041 "num_base_bdevs_operational": 4, 00:10:53.041 "base_bdevs_list": [ 00:10:53.041 { 00:10:53.041 "name": "BaseBdev1", 00:10:53.041 "uuid": "168aa6d4-ba0a-4bce-acb2-3d7a62377489", 00:10:53.041 "is_configured": true, 00:10:53.041 "data_offset": 2048, 00:10:53.041 "data_size": 63488 00:10:53.041 }, 00:10:53.041 { 00:10:53.041 "name": "BaseBdev2", 00:10:53.041 "uuid": "44d8cea7-9c8b-4428-a9da-a42322751844", 00:10:53.041 "is_configured": true, 00:10:53.041 "data_offset": 2048, 00:10:53.041 "data_size": 63488 00:10:53.041 }, 00:10:53.041 { 00:10:53.041 "name": "BaseBdev3", 00:10:53.041 "uuid": "9012d52a-1108-4b93-beef-68bfb4e9d93a", 00:10:53.041 "is_configured": true, 
00:10:53.041 "data_offset": 2048, 00:10:53.041 "data_size": 63488 00:10:53.041 }, 00:10:53.041 { 00:10:53.041 "name": "BaseBdev4", 00:10:53.041 "uuid": "c410b423-9a5a-4c1d-98bf-162ed78bfe3f", 00:10:53.041 "is_configured": true, 00:10:53.041 "data_offset": 2048, 00:10:53.041 "data_size": 63488 00:10:53.041 } 00:10:53.041 ] 00:10:53.041 } 00:10:53.042 } 00:10:53.042 }' 00:10:53.042 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:53.042 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:53.042 BaseBdev2 00:10:53.042 BaseBdev3 00:10:53.042 BaseBdev4' 00:10:53.042 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.042 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:53.042 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.042 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:53.042 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.042 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.042 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.042 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.042 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.042 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.042 17:03:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.042 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.042 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:53.042 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.042 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.301 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.301 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.301 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.301 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.301 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.301 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:53.301 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.301 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.301 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.301 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.301 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.301 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:53.301 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.301 17:03:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:53.301 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.301 17:03:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.301 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.301 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.301 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.301 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:53.301 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.302 [2024-11-20 17:03:17.051073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:53.302 [2024-11-20 17:03:17.051127] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.302 [2024-11-20 17:03:17.051244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.302 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:53.561 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.561 "name": "Existed_Raid", 00:10:53.561 "uuid": "1730b8dd-39a4-44d4-839c-b1cf19ac5a66", 00:10:53.561 "strip_size_kb": 64, 00:10:53.561 "state": "offline", 00:10:53.561 "raid_level": "concat", 00:10:53.561 "superblock": true, 00:10:53.561 "num_base_bdevs": 4, 00:10:53.561 "num_base_bdevs_discovered": 3, 00:10:53.561 "num_base_bdevs_operational": 3, 00:10:53.562 "base_bdevs_list": [ 00:10:53.562 { 00:10:53.562 "name": null, 00:10:53.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.562 "is_configured": false, 00:10:53.562 "data_offset": 0, 00:10:53.562 "data_size": 63488 00:10:53.562 }, 00:10:53.562 { 00:10:53.562 "name": "BaseBdev2", 00:10:53.562 "uuid": "44d8cea7-9c8b-4428-a9da-a42322751844", 00:10:53.562 "is_configured": true, 00:10:53.562 "data_offset": 2048, 00:10:53.562 "data_size": 63488 00:10:53.562 }, 00:10:53.562 { 00:10:53.562 "name": "BaseBdev3", 00:10:53.562 "uuid": "9012d52a-1108-4b93-beef-68bfb4e9d93a", 00:10:53.562 "is_configured": true, 00:10:53.562 "data_offset": 2048, 00:10:53.562 "data_size": 63488 00:10:53.562 }, 00:10:53.562 { 00:10:53.562 "name": "BaseBdev4", 00:10:53.562 "uuid": "c410b423-9a5a-4c1d-98bf-162ed78bfe3f", 00:10:53.562 "is_configured": true, 00:10:53.562 "data_offset": 2048, 00:10:53.562 "data_size": 63488 00:10:53.562 } 00:10:53.562 ] 00:10:53.562 }' 00:10:53.562 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.562 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.820 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:53.820 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.820 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.820 
17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.820 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.820 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:53.820 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.080 [2024-11-20 17:03:17.693999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.080 [2024-11-20 17:03:17.833652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.080 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.340 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:54.340 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:54.340 17:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:54.340 17:03:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.340 17:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.340 [2024-11-20 17:03:17.972885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:54.340 [2024-11-20 17:03:17.972942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.340 BaseBdev2 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.340 [ 00:10:54.340 { 00:10:54.340 "name": "BaseBdev2", 00:10:54.340 "aliases": [ 00:10:54.340 
"481a4a15-5685-425f-b90f-10f1bc8cebf7" 00:10:54.340 ], 00:10:54.340 "product_name": "Malloc disk", 00:10:54.340 "block_size": 512, 00:10:54.340 "num_blocks": 65536, 00:10:54.340 "uuid": "481a4a15-5685-425f-b90f-10f1bc8cebf7", 00:10:54.340 "assigned_rate_limits": { 00:10:54.340 "rw_ios_per_sec": 0, 00:10:54.340 "rw_mbytes_per_sec": 0, 00:10:54.340 "r_mbytes_per_sec": 0, 00:10:54.340 "w_mbytes_per_sec": 0 00:10:54.340 }, 00:10:54.340 "claimed": false, 00:10:54.340 "zoned": false, 00:10:54.340 "supported_io_types": { 00:10:54.340 "read": true, 00:10:54.340 "write": true, 00:10:54.340 "unmap": true, 00:10:54.340 "flush": true, 00:10:54.340 "reset": true, 00:10:54.340 "nvme_admin": false, 00:10:54.340 "nvme_io": false, 00:10:54.340 "nvme_io_md": false, 00:10:54.340 "write_zeroes": true, 00:10:54.340 "zcopy": true, 00:10:54.340 "get_zone_info": false, 00:10:54.340 "zone_management": false, 00:10:54.340 "zone_append": false, 00:10:54.340 "compare": false, 00:10:54.340 "compare_and_write": false, 00:10:54.340 "abort": true, 00:10:54.340 "seek_hole": false, 00:10:54.340 "seek_data": false, 00:10:54.340 "copy": true, 00:10:54.340 "nvme_iov_md": false 00:10:54.340 }, 00:10:54.340 "memory_domains": [ 00:10:54.340 { 00:10:54.340 "dma_device_id": "system", 00:10:54.340 "dma_device_type": 1 00:10:54.340 }, 00:10:54.340 { 00:10:54.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.340 "dma_device_type": 2 00:10:54.340 } 00:10:54.340 ], 00:10:54.340 "driver_specific": {} 00:10:54.340 } 00:10:54.340 ] 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:54.340 17:03:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.340 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.600 BaseBdev3 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.600 [ 00:10:54.600 { 
00:10:54.600 "name": "BaseBdev3", 00:10:54.600 "aliases": [ 00:10:54.600 "25855745-e9af-473b-b13c-22c90f01e596" 00:10:54.600 ], 00:10:54.600 "product_name": "Malloc disk", 00:10:54.600 "block_size": 512, 00:10:54.600 "num_blocks": 65536, 00:10:54.600 "uuid": "25855745-e9af-473b-b13c-22c90f01e596", 00:10:54.600 "assigned_rate_limits": { 00:10:54.600 "rw_ios_per_sec": 0, 00:10:54.600 "rw_mbytes_per_sec": 0, 00:10:54.600 "r_mbytes_per_sec": 0, 00:10:54.600 "w_mbytes_per_sec": 0 00:10:54.600 }, 00:10:54.600 "claimed": false, 00:10:54.600 "zoned": false, 00:10:54.600 "supported_io_types": { 00:10:54.600 "read": true, 00:10:54.600 "write": true, 00:10:54.600 "unmap": true, 00:10:54.600 "flush": true, 00:10:54.600 "reset": true, 00:10:54.600 "nvme_admin": false, 00:10:54.600 "nvme_io": false, 00:10:54.600 "nvme_io_md": false, 00:10:54.600 "write_zeroes": true, 00:10:54.600 "zcopy": true, 00:10:54.600 "get_zone_info": false, 00:10:54.600 "zone_management": false, 00:10:54.600 "zone_append": false, 00:10:54.600 "compare": false, 00:10:54.600 "compare_and_write": false, 00:10:54.600 "abort": true, 00:10:54.600 "seek_hole": false, 00:10:54.600 "seek_data": false, 00:10:54.600 "copy": true, 00:10:54.600 "nvme_iov_md": false 00:10:54.600 }, 00:10:54.600 "memory_domains": [ 00:10:54.600 { 00:10:54.600 "dma_device_id": "system", 00:10:54.600 "dma_device_type": 1 00:10:54.600 }, 00:10:54.600 { 00:10:54.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.600 "dma_device_type": 2 00:10:54.600 } 00:10:54.600 ], 00:10:54.600 "driver_specific": {} 00:10:54.600 } 00:10:54.600 ] 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.600 BaseBdev4 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.600 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:54.600 [ 00:10:54.600 { 00:10:54.600 "name": "BaseBdev4", 00:10:54.600 "aliases": [ 00:10:54.600 "448c033b-1a3e-4f8e-b743-9263a2389524" 00:10:54.600 ], 00:10:54.600 "product_name": "Malloc disk", 00:10:54.600 "block_size": 512, 00:10:54.600 "num_blocks": 65536, 00:10:54.600 "uuid": "448c033b-1a3e-4f8e-b743-9263a2389524", 00:10:54.600 "assigned_rate_limits": { 00:10:54.600 "rw_ios_per_sec": 0, 00:10:54.600 "rw_mbytes_per_sec": 0, 00:10:54.600 "r_mbytes_per_sec": 0, 00:10:54.600 "w_mbytes_per_sec": 0 00:10:54.600 }, 00:10:54.600 "claimed": false, 00:10:54.600 "zoned": false, 00:10:54.600 "supported_io_types": { 00:10:54.600 "read": true, 00:10:54.600 "write": true, 00:10:54.600 "unmap": true, 00:10:54.600 "flush": true, 00:10:54.600 "reset": true, 00:10:54.600 "nvme_admin": false, 00:10:54.600 "nvme_io": false, 00:10:54.600 "nvme_io_md": false, 00:10:54.600 "write_zeroes": true, 00:10:54.600 "zcopy": true, 00:10:54.600 "get_zone_info": false, 00:10:54.600 "zone_management": false, 00:10:54.600 "zone_append": false, 00:10:54.600 "compare": false, 00:10:54.600 "compare_and_write": false, 00:10:54.600 "abort": true, 00:10:54.600 "seek_hole": false, 00:10:54.600 "seek_data": false, 00:10:54.600 "copy": true, 00:10:54.600 "nvme_iov_md": false 00:10:54.600 }, 00:10:54.600 "memory_domains": [ 00:10:54.600 { 00:10:54.600 "dma_device_id": "system", 00:10:54.600 "dma_device_type": 1 00:10:54.600 }, 00:10:54.600 { 00:10:54.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.601 "dma_device_type": 2 00:10:54.601 } 00:10:54.601 ], 00:10:54.601 "driver_specific": {} 00:10:54.601 } 00:10:54.601 ] 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:54.601 17:03:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.601 [2024-11-20 17:03:18.333445] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.601 [2024-11-20 17:03:18.333627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.601 [2024-11-20 17:03:18.333827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.601 [2024-11-20 17:03:18.336393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.601 [2024-11-20 17:03:18.336597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.601 "name": "Existed_Raid", 00:10:54.601 "uuid": "14af42be-5b9a-48b8-82af-0b95455f4f3e", 00:10:54.601 "strip_size_kb": 64, 00:10:54.601 "state": "configuring", 00:10:54.601 "raid_level": "concat", 00:10:54.601 "superblock": true, 00:10:54.601 "num_base_bdevs": 4, 00:10:54.601 "num_base_bdevs_discovered": 3, 00:10:54.601 "num_base_bdevs_operational": 4, 00:10:54.601 "base_bdevs_list": [ 00:10:54.601 { 00:10:54.601 "name": "BaseBdev1", 00:10:54.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.601 "is_configured": false, 00:10:54.601 "data_offset": 0, 00:10:54.601 "data_size": 0 00:10:54.601 }, 00:10:54.601 { 00:10:54.601 "name": "BaseBdev2", 00:10:54.601 "uuid": "481a4a15-5685-425f-b90f-10f1bc8cebf7", 00:10:54.601 "is_configured": true, 00:10:54.601 "data_offset": 2048, 00:10:54.601 "data_size": 63488 
00:10:54.601 }, 00:10:54.601 { 00:10:54.601 "name": "BaseBdev3", 00:10:54.601 "uuid": "25855745-e9af-473b-b13c-22c90f01e596", 00:10:54.601 "is_configured": true, 00:10:54.601 "data_offset": 2048, 00:10:54.601 "data_size": 63488 00:10:54.601 }, 00:10:54.601 { 00:10:54.601 "name": "BaseBdev4", 00:10:54.601 "uuid": "448c033b-1a3e-4f8e-b743-9263a2389524", 00:10:54.601 "is_configured": true, 00:10:54.601 "data_offset": 2048, 00:10:54.601 "data_size": 63488 00:10:54.601 } 00:10:54.601 ] 00:10:54.601 }' 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.601 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.169 [2024-11-20 17:03:18.849617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.169 "name": "Existed_Raid", 00:10:55.169 "uuid": "14af42be-5b9a-48b8-82af-0b95455f4f3e", 00:10:55.169 "strip_size_kb": 64, 00:10:55.169 "state": "configuring", 00:10:55.169 "raid_level": "concat", 00:10:55.169 "superblock": true, 00:10:55.169 "num_base_bdevs": 4, 00:10:55.169 "num_base_bdevs_discovered": 2, 00:10:55.169 "num_base_bdevs_operational": 4, 00:10:55.169 "base_bdevs_list": [ 00:10:55.169 { 00:10:55.169 "name": "BaseBdev1", 00:10:55.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.169 "is_configured": false, 00:10:55.169 "data_offset": 0, 00:10:55.169 "data_size": 0 00:10:55.169 }, 00:10:55.169 { 00:10:55.169 "name": null, 00:10:55.169 "uuid": "481a4a15-5685-425f-b90f-10f1bc8cebf7", 00:10:55.169 "is_configured": false, 00:10:55.169 "data_offset": 0, 00:10:55.169 "data_size": 63488 
00:10:55.169 }, 00:10:55.169 { 00:10:55.169 "name": "BaseBdev3", 00:10:55.169 "uuid": "25855745-e9af-473b-b13c-22c90f01e596", 00:10:55.169 "is_configured": true, 00:10:55.169 "data_offset": 2048, 00:10:55.169 "data_size": 63488 00:10:55.169 }, 00:10:55.169 { 00:10:55.169 "name": "BaseBdev4", 00:10:55.169 "uuid": "448c033b-1a3e-4f8e-b743-9263a2389524", 00:10:55.169 "is_configured": true, 00:10:55.169 "data_offset": 2048, 00:10:55.169 "data_size": 63488 00:10:55.169 } 00:10:55.169 ] 00:10:55.169 }' 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.169 17:03:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.736 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.736 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.736 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.736 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:55.736 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.736 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:55.736 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:55.736 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.737 [2024-11-20 17:03:19.453365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.737 BaseBdev1 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.737 [ 00:10:55.737 { 00:10:55.737 "name": "BaseBdev1", 00:10:55.737 "aliases": [ 00:10:55.737 "85e96e5d-e0fd-4486-b6a6-b1edb470b8c3" 00:10:55.737 ], 00:10:55.737 "product_name": "Malloc disk", 00:10:55.737 "block_size": 512, 00:10:55.737 "num_blocks": 65536, 00:10:55.737 "uuid": "85e96e5d-e0fd-4486-b6a6-b1edb470b8c3", 00:10:55.737 "assigned_rate_limits": { 00:10:55.737 "rw_ios_per_sec": 0, 00:10:55.737 "rw_mbytes_per_sec": 0, 
00:10:55.737 "r_mbytes_per_sec": 0, 00:10:55.737 "w_mbytes_per_sec": 0 00:10:55.737 }, 00:10:55.737 "claimed": true, 00:10:55.737 "claim_type": "exclusive_write", 00:10:55.737 "zoned": false, 00:10:55.737 "supported_io_types": { 00:10:55.737 "read": true, 00:10:55.737 "write": true, 00:10:55.737 "unmap": true, 00:10:55.737 "flush": true, 00:10:55.737 "reset": true, 00:10:55.737 "nvme_admin": false, 00:10:55.737 "nvme_io": false, 00:10:55.737 "nvme_io_md": false, 00:10:55.737 "write_zeroes": true, 00:10:55.737 "zcopy": true, 00:10:55.737 "get_zone_info": false, 00:10:55.737 "zone_management": false, 00:10:55.737 "zone_append": false, 00:10:55.737 "compare": false, 00:10:55.737 "compare_and_write": false, 00:10:55.737 "abort": true, 00:10:55.737 "seek_hole": false, 00:10:55.737 "seek_data": false, 00:10:55.737 "copy": true, 00:10:55.737 "nvme_iov_md": false 00:10:55.737 }, 00:10:55.737 "memory_domains": [ 00:10:55.737 { 00:10:55.737 "dma_device_id": "system", 00:10:55.737 "dma_device_type": 1 00:10:55.737 }, 00:10:55.737 { 00:10:55.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.737 "dma_device_type": 2 00:10:55.737 } 00:10:55.737 ], 00:10:55.737 "driver_specific": {} 00:10:55.737 } 00:10:55.737 ] 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.737 17:03:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.737 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.737 "name": "Existed_Raid", 00:10:55.737 "uuid": "14af42be-5b9a-48b8-82af-0b95455f4f3e", 00:10:55.737 "strip_size_kb": 64, 00:10:55.738 "state": "configuring", 00:10:55.738 "raid_level": "concat", 00:10:55.738 "superblock": true, 00:10:55.738 "num_base_bdevs": 4, 00:10:55.738 "num_base_bdevs_discovered": 3, 00:10:55.738 "num_base_bdevs_operational": 4, 00:10:55.738 "base_bdevs_list": [ 00:10:55.738 { 00:10:55.738 "name": "BaseBdev1", 00:10:55.738 "uuid": "85e96e5d-e0fd-4486-b6a6-b1edb470b8c3", 00:10:55.738 "is_configured": true, 00:10:55.738 "data_offset": 2048, 00:10:55.738 "data_size": 63488 00:10:55.738 }, 00:10:55.738 { 
00:10:55.738 "name": null, 00:10:55.738 "uuid": "481a4a15-5685-425f-b90f-10f1bc8cebf7", 00:10:55.738 "is_configured": false, 00:10:55.738 "data_offset": 0, 00:10:55.738 "data_size": 63488 00:10:55.738 }, 00:10:55.738 { 00:10:55.738 "name": "BaseBdev3", 00:10:55.738 "uuid": "25855745-e9af-473b-b13c-22c90f01e596", 00:10:55.738 "is_configured": true, 00:10:55.738 "data_offset": 2048, 00:10:55.738 "data_size": 63488 00:10:55.738 }, 00:10:55.738 { 00:10:55.738 "name": "BaseBdev4", 00:10:55.738 "uuid": "448c033b-1a3e-4f8e-b743-9263a2389524", 00:10:55.738 "is_configured": true, 00:10:55.738 "data_offset": 2048, 00:10:55.738 "data_size": 63488 00:10:55.738 } 00:10:55.738 ] 00:10:55.738 }' 00:10:55.738 17:03:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.738 17:03:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.305 [2024-11-20 17:03:20.069648] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.305 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.306 17:03:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.306 "name": "Existed_Raid", 00:10:56.306 "uuid": "14af42be-5b9a-48b8-82af-0b95455f4f3e", 00:10:56.306 "strip_size_kb": 64, 00:10:56.306 "state": "configuring", 00:10:56.306 "raid_level": "concat", 00:10:56.306 "superblock": true, 00:10:56.306 "num_base_bdevs": 4, 00:10:56.306 "num_base_bdevs_discovered": 2, 00:10:56.306 "num_base_bdevs_operational": 4, 00:10:56.306 "base_bdevs_list": [ 00:10:56.306 { 00:10:56.306 "name": "BaseBdev1", 00:10:56.306 "uuid": "85e96e5d-e0fd-4486-b6a6-b1edb470b8c3", 00:10:56.306 "is_configured": true, 00:10:56.306 "data_offset": 2048, 00:10:56.306 "data_size": 63488 00:10:56.306 }, 00:10:56.306 { 00:10:56.306 "name": null, 00:10:56.306 "uuid": "481a4a15-5685-425f-b90f-10f1bc8cebf7", 00:10:56.306 "is_configured": false, 00:10:56.306 "data_offset": 0, 00:10:56.306 "data_size": 63488 00:10:56.306 }, 00:10:56.306 { 00:10:56.306 "name": null, 00:10:56.306 "uuid": "25855745-e9af-473b-b13c-22c90f01e596", 00:10:56.306 "is_configured": false, 00:10:56.306 "data_offset": 0, 00:10:56.306 "data_size": 63488 00:10:56.306 }, 00:10:56.306 { 00:10:56.306 "name": "BaseBdev4", 00:10:56.306 "uuid": "448c033b-1a3e-4f8e-b743-9263a2389524", 00:10:56.306 "is_configured": true, 00:10:56.306 "data_offset": 2048, 00:10:56.306 "data_size": 63488 00:10:56.306 } 00:10:56.306 ] 00:10:56.306 }' 00:10:56.306 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.306 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.874 17:03:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.874 [2024-11-20 17:03:20.693863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.874 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.132 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.132 "name": "Existed_Raid", 00:10:57.132 "uuid": "14af42be-5b9a-48b8-82af-0b95455f4f3e", 00:10:57.132 "strip_size_kb": 64, 00:10:57.132 "state": "configuring", 00:10:57.132 "raid_level": "concat", 00:10:57.132 "superblock": true, 00:10:57.132 "num_base_bdevs": 4, 00:10:57.132 "num_base_bdevs_discovered": 3, 00:10:57.132 "num_base_bdevs_operational": 4, 00:10:57.132 "base_bdevs_list": [ 00:10:57.132 { 00:10:57.132 "name": "BaseBdev1", 00:10:57.133 "uuid": "85e96e5d-e0fd-4486-b6a6-b1edb470b8c3", 00:10:57.133 "is_configured": true, 00:10:57.133 "data_offset": 2048, 00:10:57.133 "data_size": 63488 00:10:57.133 }, 00:10:57.133 { 00:10:57.133 "name": null, 00:10:57.133 "uuid": "481a4a15-5685-425f-b90f-10f1bc8cebf7", 00:10:57.133 "is_configured": false, 00:10:57.133 "data_offset": 0, 00:10:57.133 "data_size": 63488 00:10:57.133 }, 00:10:57.133 { 00:10:57.133 "name": "BaseBdev3", 00:10:57.133 "uuid": "25855745-e9af-473b-b13c-22c90f01e596", 00:10:57.133 "is_configured": true, 00:10:57.133 "data_offset": 2048, 00:10:57.133 "data_size": 63488 00:10:57.133 }, 00:10:57.133 { 00:10:57.133 "name": "BaseBdev4", 00:10:57.133 "uuid": 
"448c033b-1a3e-4f8e-b743-9263a2389524", 00:10:57.133 "is_configured": true, 00:10:57.133 "data_offset": 2048, 00:10:57.133 "data_size": 63488 00:10:57.133 } 00:10:57.133 ] 00:10:57.133 }' 00:10:57.133 17:03:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.133 17:03:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.391 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:57.391 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.391 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.391 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.391 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.650 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:57.650 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:57.650 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.650 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.650 [2024-11-20 17:03:21.270129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:57.650 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.650 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:57.650 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.650 17:03:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.650 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.650 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.650 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.650 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.650 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.651 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.651 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.651 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.651 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.651 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.651 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.651 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.651 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.651 "name": "Existed_Raid", 00:10:57.651 "uuid": "14af42be-5b9a-48b8-82af-0b95455f4f3e", 00:10:57.651 "strip_size_kb": 64, 00:10:57.651 "state": "configuring", 00:10:57.651 "raid_level": "concat", 00:10:57.651 "superblock": true, 00:10:57.651 "num_base_bdevs": 4, 00:10:57.651 "num_base_bdevs_discovered": 2, 00:10:57.651 "num_base_bdevs_operational": 4, 00:10:57.651 "base_bdevs_list": [ 00:10:57.651 { 00:10:57.651 "name": null, 00:10:57.651 
"uuid": "85e96e5d-e0fd-4486-b6a6-b1edb470b8c3", 00:10:57.651 "is_configured": false, 00:10:57.651 "data_offset": 0, 00:10:57.651 "data_size": 63488 00:10:57.651 }, 00:10:57.651 { 00:10:57.651 "name": null, 00:10:57.651 "uuid": "481a4a15-5685-425f-b90f-10f1bc8cebf7", 00:10:57.651 "is_configured": false, 00:10:57.651 "data_offset": 0, 00:10:57.651 "data_size": 63488 00:10:57.651 }, 00:10:57.651 { 00:10:57.651 "name": "BaseBdev3", 00:10:57.651 "uuid": "25855745-e9af-473b-b13c-22c90f01e596", 00:10:57.651 "is_configured": true, 00:10:57.651 "data_offset": 2048, 00:10:57.651 "data_size": 63488 00:10:57.651 }, 00:10:57.651 { 00:10:57.651 "name": "BaseBdev4", 00:10:57.651 "uuid": "448c033b-1a3e-4f8e-b743-9263a2389524", 00:10:57.651 "is_configured": true, 00:10:57.651 "data_offset": 2048, 00:10:57.651 "data_size": 63488 00:10:57.651 } 00:10:57.651 ] 00:10:57.651 }' 00:10:57.651 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.651 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.218 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.218 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.218 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.219 [2024-11-20 17:03:21.927761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.219 17:03:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.219 "name": "Existed_Raid", 00:10:58.219 "uuid": "14af42be-5b9a-48b8-82af-0b95455f4f3e", 00:10:58.219 "strip_size_kb": 64, 00:10:58.219 "state": "configuring", 00:10:58.219 "raid_level": "concat", 00:10:58.219 "superblock": true, 00:10:58.219 "num_base_bdevs": 4, 00:10:58.219 "num_base_bdevs_discovered": 3, 00:10:58.219 "num_base_bdevs_operational": 4, 00:10:58.219 "base_bdevs_list": [ 00:10:58.219 { 00:10:58.219 "name": null, 00:10:58.219 "uuid": "85e96e5d-e0fd-4486-b6a6-b1edb470b8c3", 00:10:58.219 "is_configured": false, 00:10:58.219 "data_offset": 0, 00:10:58.219 "data_size": 63488 00:10:58.219 }, 00:10:58.219 { 00:10:58.219 "name": "BaseBdev2", 00:10:58.219 "uuid": "481a4a15-5685-425f-b90f-10f1bc8cebf7", 00:10:58.219 "is_configured": true, 00:10:58.219 "data_offset": 2048, 00:10:58.219 "data_size": 63488 00:10:58.219 }, 00:10:58.219 { 00:10:58.219 "name": "BaseBdev3", 00:10:58.219 "uuid": "25855745-e9af-473b-b13c-22c90f01e596", 00:10:58.219 "is_configured": true, 00:10:58.219 "data_offset": 2048, 00:10:58.219 "data_size": 63488 00:10:58.219 }, 00:10:58.219 { 00:10:58.219 "name": "BaseBdev4", 00:10:58.219 "uuid": "448c033b-1a3e-4f8e-b743-9263a2389524", 00:10:58.219 "is_configured": true, 00:10:58.219 "data_offset": 2048, 00:10:58.219 "data_size": 63488 00:10:58.219 } 00:10:58.219 ] 00:10:58.219 }' 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.219 17:03:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.787 17:03:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 85e96e5d-e0fd-4486-b6a6-b1edb470b8c3 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.787 [2024-11-20 17:03:22.596150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:58.787 [2024-11-20 17:03:22.596609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:58.787 [2024-11-20 17:03:22.596634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:58.787 NewBaseBdev 00:10:58.787 [2024-11-20 17:03:22.597008] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:58.787 [2024-11-20 17:03:22.597219] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:58.787 [2024-11-20 17:03:22.597261] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:58.787 [2024-11-20 17:03:22.597409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.787 
17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.787 [ 00:10:58.787 { 00:10:58.787 "name": "NewBaseBdev", 00:10:58.787 "aliases": [ 00:10:58.787 "85e96e5d-e0fd-4486-b6a6-b1edb470b8c3" 00:10:58.787 ], 00:10:58.787 "product_name": "Malloc disk", 00:10:58.787 "block_size": 512, 00:10:58.787 "num_blocks": 65536, 00:10:58.787 "uuid": "85e96e5d-e0fd-4486-b6a6-b1edb470b8c3", 00:10:58.787 "assigned_rate_limits": { 00:10:58.787 "rw_ios_per_sec": 0, 00:10:58.787 "rw_mbytes_per_sec": 0, 00:10:58.787 "r_mbytes_per_sec": 0, 00:10:58.787 "w_mbytes_per_sec": 0 00:10:58.787 }, 00:10:58.787 "claimed": true, 00:10:58.787 "claim_type": "exclusive_write", 00:10:58.787 "zoned": false, 00:10:58.787 "supported_io_types": { 00:10:58.787 "read": true, 00:10:58.787 "write": true, 00:10:58.787 "unmap": true, 00:10:58.787 "flush": true, 00:10:58.787 "reset": true, 00:10:58.787 "nvme_admin": false, 00:10:58.787 "nvme_io": false, 00:10:58.787 "nvme_io_md": false, 00:10:58.787 "write_zeroes": true, 00:10:58.787 "zcopy": true, 00:10:58.787 "get_zone_info": false, 00:10:58.787 "zone_management": false, 00:10:58.787 "zone_append": false, 00:10:58.787 "compare": false, 00:10:58.787 "compare_and_write": false, 00:10:58.787 "abort": true, 00:10:58.787 "seek_hole": false, 00:10:58.787 "seek_data": false, 00:10:58.787 "copy": true, 00:10:58.787 "nvme_iov_md": false 00:10:58.787 }, 00:10:58.787 "memory_domains": [ 00:10:58.787 { 00:10:58.787 "dma_device_id": "system", 00:10:58.787 "dma_device_type": 1 00:10:58.787 }, 00:10:58.787 { 00:10:58.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.787 "dma_device_type": 2 00:10:58.787 } 00:10:58.787 ], 00:10:58.787 "driver_specific": {} 00:10:58.787 } 00:10:58.787 ] 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:58.787 17:03:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.787 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.046 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.046 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.046 "name": "Existed_Raid", 00:10:59.046 "uuid": "14af42be-5b9a-48b8-82af-0b95455f4f3e", 00:10:59.046 "strip_size_kb": 64, 00:10:59.046 
"state": "online", 00:10:59.046 "raid_level": "concat", 00:10:59.046 "superblock": true, 00:10:59.046 "num_base_bdevs": 4, 00:10:59.046 "num_base_bdevs_discovered": 4, 00:10:59.046 "num_base_bdevs_operational": 4, 00:10:59.046 "base_bdevs_list": [ 00:10:59.046 { 00:10:59.046 "name": "NewBaseBdev", 00:10:59.046 "uuid": "85e96e5d-e0fd-4486-b6a6-b1edb470b8c3", 00:10:59.046 "is_configured": true, 00:10:59.046 "data_offset": 2048, 00:10:59.046 "data_size": 63488 00:10:59.046 }, 00:10:59.046 { 00:10:59.046 "name": "BaseBdev2", 00:10:59.046 "uuid": "481a4a15-5685-425f-b90f-10f1bc8cebf7", 00:10:59.046 "is_configured": true, 00:10:59.046 "data_offset": 2048, 00:10:59.046 "data_size": 63488 00:10:59.046 }, 00:10:59.046 { 00:10:59.046 "name": "BaseBdev3", 00:10:59.046 "uuid": "25855745-e9af-473b-b13c-22c90f01e596", 00:10:59.046 "is_configured": true, 00:10:59.046 "data_offset": 2048, 00:10:59.046 "data_size": 63488 00:10:59.046 }, 00:10:59.046 { 00:10:59.046 "name": "BaseBdev4", 00:10:59.046 "uuid": "448c033b-1a3e-4f8e-b743-9263a2389524", 00:10:59.046 "is_configured": true, 00:10:59.046 "data_offset": 2048, 00:10:59.046 "data_size": 63488 00:10:59.046 } 00:10:59.046 ] 00:10:59.046 }' 00:10:59.046 17:03:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.046 17:03:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:59.615 
17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.615 [2024-11-20 17:03:23.208825] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:59.615 "name": "Existed_Raid", 00:10:59.615 "aliases": [ 00:10:59.615 "14af42be-5b9a-48b8-82af-0b95455f4f3e" 00:10:59.615 ], 00:10:59.615 "product_name": "Raid Volume", 00:10:59.615 "block_size": 512, 00:10:59.615 "num_blocks": 253952, 00:10:59.615 "uuid": "14af42be-5b9a-48b8-82af-0b95455f4f3e", 00:10:59.615 "assigned_rate_limits": { 00:10:59.615 "rw_ios_per_sec": 0, 00:10:59.615 "rw_mbytes_per_sec": 0, 00:10:59.615 "r_mbytes_per_sec": 0, 00:10:59.615 "w_mbytes_per_sec": 0 00:10:59.615 }, 00:10:59.615 "claimed": false, 00:10:59.615 "zoned": false, 00:10:59.615 "supported_io_types": { 00:10:59.615 "read": true, 00:10:59.615 "write": true, 00:10:59.615 "unmap": true, 00:10:59.615 "flush": true, 00:10:59.615 "reset": true, 00:10:59.615 "nvme_admin": false, 00:10:59.615 "nvme_io": false, 00:10:59.615 "nvme_io_md": false, 00:10:59.615 "write_zeroes": true, 00:10:59.615 "zcopy": false, 00:10:59.615 "get_zone_info": false, 00:10:59.615 "zone_management": false, 00:10:59.615 "zone_append": false, 00:10:59.615 "compare": false, 00:10:59.615 "compare_and_write": false, 00:10:59.615 "abort": 
false, 00:10:59.615 "seek_hole": false, 00:10:59.615 "seek_data": false, 00:10:59.615 "copy": false, 00:10:59.615 "nvme_iov_md": false 00:10:59.615 }, 00:10:59.615 "memory_domains": [ 00:10:59.615 { 00:10:59.615 "dma_device_id": "system", 00:10:59.615 "dma_device_type": 1 00:10:59.615 }, 00:10:59.615 { 00:10:59.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.615 "dma_device_type": 2 00:10:59.615 }, 00:10:59.615 { 00:10:59.615 "dma_device_id": "system", 00:10:59.615 "dma_device_type": 1 00:10:59.615 }, 00:10:59.615 { 00:10:59.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.615 "dma_device_type": 2 00:10:59.615 }, 00:10:59.615 { 00:10:59.615 "dma_device_id": "system", 00:10:59.615 "dma_device_type": 1 00:10:59.615 }, 00:10:59.615 { 00:10:59.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.615 "dma_device_type": 2 00:10:59.615 }, 00:10:59.615 { 00:10:59.615 "dma_device_id": "system", 00:10:59.615 "dma_device_type": 1 00:10:59.615 }, 00:10:59.615 { 00:10:59.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.615 "dma_device_type": 2 00:10:59.615 } 00:10:59.615 ], 00:10:59.615 "driver_specific": { 00:10:59.615 "raid": { 00:10:59.615 "uuid": "14af42be-5b9a-48b8-82af-0b95455f4f3e", 00:10:59.615 "strip_size_kb": 64, 00:10:59.615 "state": "online", 00:10:59.615 "raid_level": "concat", 00:10:59.615 "superblock": true, 00:10:59.615 "num_base_bdevs": 4, 00:10:59.615 "num_base_bdevs_discovered": 4, 00:10:59.615 "num_base_bdevs_operational": 4, 00:10:59.615 "base_bdevs_list": [ 00:10:59.615 { 00:10:59.615 "name": "NewBaseBdev", 00:10:59.615 "uuid": "85e96e5d-e0fd-4486-b6a6-b1edb470b8c3", 00:10:59.615 "is_configured": true, 00:10:59.615 "data_offset": 2048, 00:10:59.615 "data_size": 63488 00:10:59.615 }, 00:10:59.615 { 00:10:59.615 "name": "BaseBdev2", 00:10:59.615 "uuid": "481a4a15-5685-425f-b90f-10f1bc8cebf7", 00:10:59.615 "is_configured": true, 00:10:59.615 "data_offset": 2048, 00:10:59.615 "data_size": 63488 00:10:59.615 }, 00:10:59.615 { 00:10:59.615 
"name": "BaseBdev3", 00:10:59.615 "uuid": "25855745-e9af-473b-b13c-22c90f01e596", 00:10:59.615 "is_configured": true, 00:10:59.615 "data_offset": 2048, 00:10:59.615 "data_size": 63488 00:10:59.615 }, 00:10:59.615 { 00:10:59.615 "name": "BaseBdev4", 00:10:59.615 "uuid": "448c033b-1a3e-4f8e-b743-9263a2389524", 00:10:59.615 "is_configured": true, 00:10:59.615 "data_offset": 2048, 00:10:59.615 "data_size": 63488 00:10:59.615 } 00:10:59.615 ] 00:10:59.615 } 00:10:59.615 } 00:10:59.615 }' 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:59.615 BaseBdev2 00:10:59.615 BaseBdev3 00:10:59.615 BaseBdev4' 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.615 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.616 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.616 17:03:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.616 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.616 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:59.616 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.616 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.616 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.616 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.616 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.616 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.616 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.616 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:59.616 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.616 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.616 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.875 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.875 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.875 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:59.875 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.876 [2024-11-20 17:03:23.572534] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:59.876 [2024-11-20 17:03:23.572589] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.876 [2024-11-20 17:03:23.572682] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.876 [2024-11-20 17:03:23.572814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.876 [2024-11-20 17:03:23.572847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71897 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71897 ']' 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71897 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71897 00:10:59.876 killing process with pid 71897 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71897' 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71897 00:10:59.876 [2024-11-20 17:03:23.610127] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:59.876 17:03:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71897 00:11:00.136 [2024-11-20 17:03:23.988512] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.515 17:03:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:01.515 00:11:01.515 real 0m12.995s 00:11:01.515 user 0m21.571s 00:11:01.515 sys 0m1.703s 00:11:01.515 17:03:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.515 17:03:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.515 ************************************ 00:11:01.515 END TEST raid_state_function_test_sb 00:11:01.515 ************************************ 00:11:01.515 17:03:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:01.515 17:03:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:01.515 17:03:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.515 17:03:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:01.515 ************************************ 00:11:01.515 START TEST raid_superblock_test 00:11:01.515 ************************************ 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72578 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72578 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72578 ']' 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.515 17:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.515 [2024-11-20 17:03:25.315732] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:11:01.515 [2024-11-20 17:03:25.315979] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72578 ] 00:11:01.774 [2024-11-20 17:03:25.507562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.032 [2024-11-20 17:03:25.667840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.032 [2024-11-20 17:03:25.896292] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.032 [2024-11-20 17:03:25.896370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:02.600 
17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.600 malloc1 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.600 [2024-11-20 17:03:26.393732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:02.600 [2024-11-20 17:03:26.393844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.600 [2024-11-20 17:03:26.393892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:02.600 [2024-11-20 17:03:26.393907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.600 [2024-11-20 17:03:26.396745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.600 [2024-11-20 17:03:26.396806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:02.600 pt1 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.600 malloc2 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.600 [2024-11-20 17:03:26.442456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:02.600 [2024-11-20 17:03:26.442522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.600 [2024-11-20 17:03:26.442558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:02.600 [2024-11-20 17:03:26.442573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.600 [2024-11-20 17:03:26.445363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.600 [2024-11-20 17:03:26.445408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:02.600 
pt2 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.600 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.859 malloc3 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.859 [2024-11-20 17:03:26.511718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:02.859 [2024-11-20 17:03:26.511797] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.859 [2024-11-20 17:03:26.511832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:02.859 [2024-11-20 17:03:26.511847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.859 [2024-11-20 17:03:26.514694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.859 [2024-11-20 17:03:26.514764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:02.859 pt3 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.859 malloc4 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.859 [2024-11-20 17:03:26.571952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:02.859 [2024-11-20 17:03:26.572145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.859 [2024-11-20 17:03:26.572185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:02.859 [2024-11-20 17:03:26.572200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.859 [2024-11-20 17:03:26.575284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.859 [2024-11-20 17:03:26.575446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:02.859 pt4 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.859 [2024-11-20 17:03:26.584191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:02.859 [2024-11-20 
17:03:26.586882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:02.859 [2024-11-20 17:03:26.587169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:02.859 [2024-11-20 17:03:26.587314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:02.859 [2024-11-20 17:03:26.587625] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:02.859 [2024-11-20 17:03:26.587743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:02.859 [2024-11-20 17:03:26.588196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:02.859 [2024-11-20 17:03:26.588568] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:02.859 [2024-11-20 17:03:26.588695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:02.859 [2024-11-20 17:03:26.589037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.859 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.860 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.860 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.860 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.860 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.860 "name": "raid_bdev1", 00:11:02.860 "uuid": "c4c5ea6c-b0da-47ca-a73a-e083a616a40b", 00:11:02.860 "strip_size_kb": 64, 00:11:02.860 "state": "online", 00:11:02.860 "raid_level": "concat", 00:11:02.860 "superblock": true, 00:11:02.860 "num_base_bdevs": 4, 00:11:02.860 "num_base_bdevs_discovered": 4, 00:11:02.860 "num_base_bdevs_operational": 4, 00:11:02.860 "base_bdevs_list": [ 00:11:02.860 { 00:11:02.860 "name": "pt1", 00:11:02.860 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.860 "is_configured": true, 00:11:02.860 "data_offset": 2048, 00:11:02.860 "data_size": 63488 00:11:02.860 }, 00:11:02.860 { 00:11:02.860 "name": "pt2", 00:11:02.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.860 "is_configured": true, 00:11:02.860 "data_offset": 2048, 00:11:02.860 "data_size": 63488 00:11:02.860 }, 00:11:02.860 { 00:11:02.860 "name": "pt3", 00:11:02.860 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:02.860 "is_configured": true, 00:11:02.860 "data_offset": 2048, 00:11:02.860 
"data_size": 63488 00:11:02.860 }, 00:11:02.860 { 00:11:02.860 "name": "pt4", 00:11:02.860 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:02.860 "is_configured": true, 00:11:02.860 "data_offset": 2048, 00:11:02.860 "data_size": 63488 00:11:02.860 } 00:11:02.860 ] 00:11:02.860 }' 00:11:02.860 17:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.860 17:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.427 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:03.427 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:03.427 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.427 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.427 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.427 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.427 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.427 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.427 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.427 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.427 [2024-11-20 17:03:27.137566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.427 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.427 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.427 "name": "raid_bdev1", 00:11:03.427 "aliases": [ 00:11:03.427 "c4c5ea6c-b0da-47ca-a73a-e083a616a40b" 
00:11:03.427 ], 00:11:03.427 "product_name": "Raid Volume", 00:11:03.427 "block_size": 512, 00:11:03.427 "num_blocks": 253952, 00:11:03.427 "uuid": "c4c5ea6c-b0da-47ca-a73a-e083a616a40b", 00:11:03.427 "assigned_rate_limits": { 00:11:03.427 "rw_ios_per_sec": 0, 00:11:03.427 "rw_mbytes_per_sec": 0, 00:11:03.427 "r_mbytes_per_sec": 0, 00:11:03.427 "w_mbytes_per_sec": 0 00:11:03.427 }, 00:11:03.427 "claimed": false, 00:11:03.427 "zoned": false, 00:11:03.427 "supported_io_types": { 00:11:03.427 "read": true, 00:11:03.427 "write": true, 00:11:03.427 "unmap": true, 00:11:03.427 "flush": true, 00:11:03.427 "reset": true, 00:11:03.427 "nvme_admin": false, 00:11:03.427 "nvme_io": false, 00:11:03.427 "nvme_io_md": false, 00:11:03.427 "write_zeroes": true, 00:11:03.427 "zcopy": false, 00:11:03.427 "get_zone_info": false, 00:11:03.427 "zone_management": false, 00:11:03.427 "zone_append": false, 00:11:03.427 "compare": false, 00:11:03.427 "compare_and_write": false, 00:11:03.427 "abort": false, 00:11:03.427 "seek_hole": false, 00:11:03.427 "seek_data": false, 00:11:03.427 "copy": false, 00:11:03.427 "nvme_iov_md": false 00:11:03.427 }, 00:11:03.427 "memory_domains": [ 00:11:03.427 { 00:11:03.427 "dma_device_id": "system", 00:11:03.427 "dma_device_type": 1 00:11:03.427 }, 00:11:03.427 { 00:11:03.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.427 "dma_device_type": 2 00:11:03.427 }, 00:11:03.427 { 00:11:03.427 "dma_device_id": "system", 00:11:03.427 "dma_device_type": 1 00:11:03.427 }, 00:11:03.427 { 00:11:03.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.427 "dma_device_type": 2 00:11:03.427 }, 00:11:03.427 { 00:11:03.427 "dma_device_id": "system", 00:11:03.427 "dma_device_type": 1 00:11:03.427 }, 00:11:03.427 { 00:11:03.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.427 "dma_device_type": 2 00:11:03.427 }, 00:11:03.427 { 00:11:03.427 "dma_device_id": "system", 00:11:03.427 "dma_device_type": 1 00:11:03.427 }, 00:11:03.427 { 00:11:03.427 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:03.427 "dma_device_type": 2 00:11:03.427 } 00:11:03.427 ], 00:11:03.427 "driver_specific": { 00:11:03.427 "raid": { 00:11:03.427 "uuid": "c4c5ea6c-b0da-47ca-a73a-e083a616a40b", 00:11:03.427 "strip_size_kb": 64, 00:11:03.427 "state": "online", 00:11:03.427 "raid_level": "concat", 00:11:03.427 "superblock": true, 00:11:03.427 "num_base_bdevs": 4, 00:11:03.427 "num_base_bdevs_discovered": 4, 00:11:03.427 "num_base_bdevs_operational": 4, 00:11:03.427 "base_bdevs_list": [ 00:11:03.427 { 00:11:03.427 "name": "pt1", 00:11:03.427 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.427 "is_configured": true, 00:11:03.427 "data_offset": 2048, 00:11:03.427 "data_size": 63488 00:11:03.427 }, 00:11:03.427 { 00:11:03.427 "name": "pt2", 00:11:03.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.427 "is_configured": true, 00:11:03.427 "data_offset": 2048, 00:11:03.427 "data_size": 63488 00:11:03.427 }, 00:11:03.427 { 00:11:03.427 "name": "pt3", 00:11:03.427 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.427 "is_configured": true, 00:11:03.427 "data_offset": 2048, 00:11:03.427 "data_size": 63488 00:11:03.427 }, 00:11:03.427 { 00:11:03.427 "name": "pt4", 00:11:03.427 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:03.427 "is_configured": true, 00:11:03.427 "data_offset": 2048, 00:11:03.427 "data_size": 63488 00:11:03.427 } 00:11:03.427 ] 00:11:03.427 } 00:11:03.427 } 00:11:03.427 }' 00:11:03.427 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.427 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:03.427 pt2 00:11:03.427 pt3 00:11:03.427 pt4' 00:11:03.427 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.427 17:03:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.427 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.685 17:03:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.685 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.686 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.686 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:03.686 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.686 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.686 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.686 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.686 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.686 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.686 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.686 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:03.686 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:03.686 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.686 [2024-11-20 17:03:27.513634] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.686 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c4c5ea6c-b0da-47ca-a73a-e083a616a40b 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c4c5ea6c-b0da-47ca-a73a-e083a616a40b ']' 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.002 [2024-11-20 17:03:27.565274] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.002 [2024-11-20 17:03:27.565423] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.002 [2024-11-20 17:03:27.565563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.002 [2024-11-20 17:03:27.565680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.002 [2024-11-20 17:03:27.565705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:04.002 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:04.003 17:03:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.003 [2024-11-20 17:03:27.717343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:04.003 [2024-11-20 17:03:27.719953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:04.003 [2024-11-20 17:03:27.720152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:04.003 [2024-11-20 17:03:27.720222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:04.003 [2024-11-20 17:03:27.720324] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:04.003 [2024-11-20 17:03:27.720418] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:04.003 [2024-11-20 17:03:27.720452] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:04.003 [2024-11-20 17:03:27.720484] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:04.003 [2024-11-20 17:03:27.720506] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.003 [2024-11-20 17:03:27.720521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:04.003 request: 00:11:04.003 { 00:11:04.003 "name": "raid_bdev1", 00:11:04.003 "raid_level": "concat", 00:11:04.003 "base_bdevs": [ 00:11:04.003 "malloc1", 00:11:04.003 "malloc2", 00:11:04.003 "malloc3", 00:11:04.003 "malloc4" 00:11:04.003 ], 00:11:04.003 "strip_size_kb": 64, 00:11:04.003 "superblock": false, 00:11:04.003 "method": "bdev_raid_create", 00:11:04.003 "req_id": 1 00:11:04.003 } 00:11:04.003 Got JSON-RPC error response 00:11:04.003 response: 00:11:04.003 { 00:11:04.003 "code": -17, 00:11:04.003 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:04.003 } 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.003 [2024-11-20 17:03:27.789359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:04.003 [2024-11-20 17:03:27.789559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.003 [2024-11-20 17:03:27.789628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:04.003 [2024-11-20 17:03:27.789737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.003 [2024-11-20 17:03:27.792570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.003 [2024-11-20 17:03:27.792733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:04.003 [2024-11-20 17:03:27.792935] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:04.003 [2024-11-20 17:03:27.793110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:04.003 pt1 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.003 "name": "raid_bdev1", 00:11:04.003 "uuid": "c4c5ea6c-b0da-47ca-a73a-e083a616a40b", 00:11:04.003 "strip_size_kb": 64, 00:11:04.003 "state": "configuring", 00:11:04.003 "raid_level": "concat", 00:11:04.003 "superblock": true, 00:11:04.003 "num_base_bdevs": 4, 00:11:04.003 "num_base_bdevs_discovered": 1, 00:11:04.003 "num_base_bdevs_operational": 4, 00:11:04.003 "base_bdevs_list": [ 00:11:04.003 { 00:11:04.003 "name": "pt1", 00:11:04.003 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.003 "is_configured": true, 00:11:04.003 "data_offset": 2048, 00:11:04.003 "data_size": 63488 00:11:04.003 }, 00:11:04.003 { 00:11:04.003 "name": null, 00:11:04.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.003 "is_configured": false, 00:11:04.003 "data_offset": 2048, 00:11:04.003 "data_size": 63488 00:11:04.003 }, 00:11:04.003 { 00:11:04.003 "name": null, 00:11:04.003 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.003 "is_configured": false, 00:11:04.003 "data_offset": 2048, 00:11:04.003 "data_size": 63488 00:11:04.003 }, 00:11:04.003 { 00:11:04.003 "name": null, 00:11:04.003 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:04.003 "is_configured": false, 00:11:04.003 "data_offset": 2048, 00:11:04.003 "data_size": 63488 00:11:04.003 } 00:11:04.003 ] 00:11:04.003 }' 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.003 17:03:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.570 [2024-11-20 17:03:28.317720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.570 [2024-11-20 17:03:28.317847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.570 [2024-11-20 17:03:28.317876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:04.570 [2024-11-20 17:03:28.317894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.570 [2024-11-20 17:03:28.318463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.570 [2024-11-20 17:03:28.318497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.570 [2024-11-20 17:03:28.318623] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:04.570 [2024-11-20 17:03:28.318658] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.570 pt2 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.570 [2024-11-20 17:03:28.325729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.570 17:03:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.570 "name": "raid_bdev1", 00:11:04.570 "uuid": "c4c5ea6c-b0da-47ca-a73a-e083a616a40b", 00:11:04.570 "strip_size_kb": 64, 00:11:04.570 "state": "configuring", 00:11:04.570 "raid_level": "concat", 00:11:04.570 "superblock": true, 00:11:04.570 "num_base_bdevs": 4, 00:11:04.570 "num_base_bdevs_discovered": 1, 00:11:04.570 "num_base_bdevs_operational": 4, 00:11:04.570 "base_bdevs_list": [ 00:11:04.570 { 00:11:04.570 "name": "pt1", 00:11:04.570 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.570 "is_configured": true, 00:11:04.570 "data_offset": 2048, 00:11:04.570 "data_size": 63488 00:11:04.570 }, 00:11:04.570 { 00:11:04.570 "name": null, 00:11:04.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.570 "is_configured": false, 00:11:04.570 "data_offset": 0, 00:11:04.570 "data_size": 63488 00:11:04.570 }, 00:11:04.570 { 00:11:04.570 "name": null, 00:11:04.570 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.570 "is_configured": false, 00:11:04.570 "data_offset": 2048, 00:11:04.570 "data_size": 63488 00:11:04.570 }, 00:11:04.570 { 00:11:04.570 "name": null, 00:11:04.570 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:04.570 "is_configured": false, 00:11:04.570 "data_offset": 2048, 00:11:04.570 "data_size": 63488 00:11:04.570 } 00:11:04.570 ] 00:11:04.570 }' 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.570 17:03:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.136 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:05.136 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:05.136 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:05.136 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.136 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.136 [2024-11-20 17:03:28.866036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:05.136 [2024-11-20 17:03:28.866107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.136 [2024-11-20 17:03:28.866138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:05.136 [2024-11-20 17:03:28.866152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.136 [2024-11-20 17:03:28.866706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.136 [2024-11-20 17:03:28.866745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:05.136 [2024-11-20 17:03:28.866893] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:05.136 [2024-11-20 17:03:28.866954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:05.136 pt2 00:11:05.136 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.136 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:05.136 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:05.136 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:05.136 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.136 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.136 [2024-11-20 17:03:28.874021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:05.136 [2024-11-20 17:03:28.874087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.136 [2024-11-20 17:03:28.874127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:05.136 [2024-11-20 17:03:28.874167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.136 [2024-11-20 17:03:28.874623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.136 [2024-11-20 17:03:28.874654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:05.136 [2024-11-20 17:03:28.874733] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:05.136 [2024-11-20 17:03:28.874781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:05.136 pt3 00:11:05.136 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.136 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:05.136 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:05.136 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:05.136 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.136 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.136 [2024-11-20 17:03:28.881972] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:05.136 [2024-11-20 17:03:28.882034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.136 [2024-11-20 17:03:28.882061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:05.136 [2024-11-20 17:03:28.882074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.136 [2024-11-20 17:03:28.882533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.136 [2024-11-20 17:03:28.882563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:05.136 [2024-11-20 17:03:28.882644] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:05.136 [2024-11-20 17:03:28.882677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:05.137 [2024-11-20 17:03:28.882852] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:05.137 [2024-11-20 17:03:28.882868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:05.137 [2024-11-20 17:03:28.883170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:05.137 [2024-11-20 17:03:28.883386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:05.137 [2024-11-20 17:03:28.883408] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:05.137 [2024-11-20 17:03:28.883577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.137 pt4 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.137 "name": "raid_bdev1", 00:11:05.137 "uuid": "c4c5ea6c-b0da-47ca-a73a-e083a616a40b", 00:11:05.137 "strip_size_kb": 64, 00:11:05.137 "state": "online", 00:11:05.137 "raid_level": "concat", 00:11:05.137 
"superblock": true, 00:11:05.137 "num_base_bdevs": 4, 00:11:05.137 "num_base_bdevs_discovered": 4, 00:11:05.137 "num_base_bdevs_operational": 4, 00:11:05.137 "base_bdevs_list": [ 00:11:05.137 { 00:11:05.137 "name": "pt1", 00:11:05.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.137 "is_configured": true, 00:11:05.137 "data_offset": 2048, 00:11:05.137 "data_size": 63488 00:11:05.137 }, 00:11:05.137 { 00:11:05.137 "name": "pt2", 00:11:05.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.137 "is_configured": true, 00:11:05.137 "data_offset": 2048, 00:11:05.137 "data_size": 63488 00:11:05.137 }, 00:11:05.137 { 00:11:05.137 "name": "pt3", 00:11:05.137 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.137 "is_configured": true, 00:11:05.137 "data_offset": 2048, 00:11:05.137 "data_size": 63488 00:11:05.137 }, 00:11:05.137 { 00:11:05.137 "name": "pt4", 00:11:05.137 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:05.137 "is_configured": true, 00:11:05.137 "data_offset": 2048, 00:11:05.137 "data_size": 63488 00:11:05.137 } 00:11:05.137 ] 00:11:05.137 }' 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.137 17:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.784 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:05.784 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:05.784 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.784 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.784 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.784 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.784 17:03:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.784 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.784 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.784 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.784 [2024-11-20 17:03:29.414708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.784 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.784 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.784 "name": "raid_bdev1", 00:11:05.784 "aliases": [ 00:11:05.784 "c4c5ea6c-b0da-47ca-a73a-e083a616a40b" 00:11:05.784 ], 00:11:05.784 "product_name": "Raid Volume", 00:11:05.784 "block_size": 512, 00:11:05.784 "num_blocks": 253952, 00:11:05.784 "uuid": "c4c5ea6c-b0da-47ca-a73a-e083a616a40b", 00:11:05.784 "assigned_rate_limits": { 00:11:05.784 "rw_ios_per_sec": 0, 00:11:05.784 "rw_mbytes_per_sec": 0, 00:11:05.784 "r_mbytes_per_sec": 0, 00:11:05.784 "w_mbytes_per_sec": 0 00:11:05.784 }, 00:11:05.784 "claimed": false, 00:11:05.784 "zoned": false, 00:11:05.784 "supported_io_types": { 00:11:05.784 "read": true, 00:11:05.784 "write": true, 00:11:05.784 "unmap": true, 00:11:05.784 "flush": true, 00:11:05.784 "reset": true, 00:11:05.784 "nvme_admin": false, 00:11:05.784 "nvme_io": false, 00:11:05.784 "nvme_io_md": false, 00:11:05.784 "write_zeroes": true, 00:11:05.784 "zcopy": false, 00:11:05.784 "get_zone_info": false, 00:11:05.784 "zone_management": false, 00:11:05.784 "zone_append": false, 00:11:05.784 "compare": false, 00:11:05.784 "compare_and_write": false, 00:11:05.784 "abort": false, 00:11:05.784 "seek_hole": false, 00:11:05.784 "seek_data": false, 00:11:05.784 "copy": false, 00:11:05.784 "nvme_iov_md": false 00:11:05.784 }, 00:11:05.784 
"memory_domains": [ 00:11:05.784 { 00:11:05.784 "dma_device_id": "system", 00:11:05.784 "dma_device_type": 1 00:11:05.784 }, 00:11:05.784 { 00:11:05.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.784 "dma_device_type": 2 00:11:05.784 }, 00:11:05.784 { 00:11:05.784 "dma_device_id": "system", 00:11:05.784 "dma_device_type": 1 00:11:05.784 }, 00:11:05.784 { 00:11:05.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.784 "dma_device_type": 2 00:11:05.784 }, 00:11:05.784 { 00:11:05.784 "dma_device_id": "system", 00:11:05.784 "dma_device_type": 1 00:11:05.784 }, 00:11:05.784 { 00:11:05.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.784 "dma_device_type": 2 00:11:05.784 }, 00:11:05.784 { 00:11:05.784 "dma_device_id": "system", 00:11:05.784 "dma_device_type": 1 00:11:05.784 }, 00:11:05.784 { 00:11:05.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.784 "dma_device_type": 2 00:11:05.784 } 00:11:05.784 ], 00:11:05.784 "driver_specific": { 00:11:05.784 "raid": { 00:11:05.784 "uuid": "c4c5ea6c-b0da-47ca-a73a-e083a616a40b", 00:11:05.784 "strip_size_kb": 64, 00:11:05.784 "state": "online", 00:11:05.784 "raid_level": "concat", 00:11:05.784 "superblock": true, 00:11:05.784 "num_base_bdevs": 4, 00:11:05.784 "num_base_bdevs_discovered": 4, 00:11:05.784 "num_base_bdevs_operational": 4, 00:11:05.784 "base_bdevs_list": [ 00:11:05.784 { 00:11:05.784 "name": "pt1", 00:11:05.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.784 "is_configured": true, 00:11:05.784 "data_offset": 2048, 00:11:05.784 "data_size": 63488 00:11:05.784 }, 00:11:05.784 { 00:11:05.784 "name": "pt2", 00:11:05.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.784 "is_configured": true, 00:11:05.784 "data_offset": 2048, 00:11:05.785 "data_size": 63488 00:11:05.785 }, 00:11:05.785 { 00:11:05.785 "name": "pt3", 00:11:05.785 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.785 "is_configured": true, 00:11:05.785 "data_offset": 2048, 00:11:05.785 "data_size": 63488 
00:11:05.785 }, 00:11:05.785 { 00:11:05.785 "name": "pt4", 00:11:05.785 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:05.785 "is_configured": true, 00:11:05.785 "data_offset": 2048, 00:11:05.785 "data_size": 63488 00:11:05.785 } 00:11:05.785 ] 00:11:05.785 } 00:11:05.785 } 00:11:05.785 }' 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:05.785 pt2 00:11:05.785 pt3 00:11:05.785 pt4' 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.785 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:06.043 [2024-11-20 17:03:29.798739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c4c5ea6c-b0da-47ca-a73a-e083a616a40b '!=' c4c5ea6c-b0da-47ca-a73a-e083a616a40b ']' 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72578 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72578 ']' 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72578 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72578 00:11:06.043 killing process with pid 72578 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72578' 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72578 00:11:06.043 [2024-11-20 17:03:29.879704] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.043 [2024-11-20 17:03:29.879801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.043 17:03:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72578 00:11:06.043 [2024-11-20 17:03:29.879900] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.043 [2024-11-20 17:03:29.879932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:06.609 [2024-11-20 17:03:30.254212] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.543 17:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:07.543 00:11:07.543 real 0m6.171s 00:11:07.543 user 0m9.229s 00:11:07.543 sys 0m0.919s 00:11:07.543 ************************************ 00:11:07.543 END TEST raid_superblock_test 00:11:07.543 ************************************ 00:11:07.543 17:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.543 17:03:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.802 17:03:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:07.802 17:03:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:07.802 17:03:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.802 17:03:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.802 ************************************ 00:11:07.802 START TEST raid_read_error_test 00:11:07.802 ************************************ 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qATVLPdEiU 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72848 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:07.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72848 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72848 ']' 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.802 17:03:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.803 [2024-11-20 17:03:31.540070] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:11:07.803 [2024-11-20 17:03:31.540360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72848 ] 00:11:08.060 [2024-11-20 17:03:31.719496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.061 [2024-11-20 17:03:31.856853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.319 [2024-11-20 17:03:32.075531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.319 [2024-11-20 17:03:32.075601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.885 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.885 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:08.885 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.885 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:08.885 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.885 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.885 BaseBdev1_malloc 00:11:08.885 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.885 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:08.885 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.886 true 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.886 [2024-11-20 17:03:32.565076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:08.886 [2024-11-20 17:03:32.565142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.886 [2024-11-20 17:03:32.565185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:08.886 [2024-11-20 17:03:32.565233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.886 [2024-11-20 17:03:32.568528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.886 [2024-11-20 17:03:32.568578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:08.886 BaseBdev1 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.886 BaseBdev2_malloc 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.886 true 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.886 [2024-11-20 17:03:32.628042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:08.886 [2024-11-20 17:03:32.628234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.886 [2024-11-20 17:03:32.628269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:08.886 [2024-11-20 17:03:32.628287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.886 [2024-11-20 17:03:32.631571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.886 BaseBdev2 00:11:08.886 [2024-11-20 17:03:32.631740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.886 BaseBdev3_malloc 00:11:08.886 17:03:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.886 true 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.886 [2024-11-20 17:03:32.702664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:08.886 [2024-11-20 17:03:32.702755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.886 [2024-11-20 17:03:32.702815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:08.886 [2024-11-20 17:03:32.702835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.886 [2024-11-20 17:03:32.705815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.886 [2024-11-20 17:03:32.705860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:08.886 BaseBdev3 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.886 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.144 BaseBdev4_malloc 00:11:09.144 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.144 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:09.144 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.144 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.144 true 00:11:09.144 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.144 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:09.144 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.144 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.144 [2024-11-20 17:03:32.767058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:09.144 [2024-11-20 17:03:32.767119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.144 [2024-11-20 17:03:32.767146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:09.144 [2024-11-20 17:03:32.767163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.144 [2024-11-20 17:03:32.770045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.144 [2024-11-20 17:03:32.770123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:09.144 BaseBdev4 00:11:09.144 17:03:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.144 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:09.144 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.144 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.144 [2024-11-20 17:03:32.775112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.144 [2024-11-20 17:03:32.777732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.144 [2024-11-20 17:03:32.777827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.144 [2024-11-20 17:03:32.777929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:09.144 [2024-11-20 17:03:32.778242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:09.144 [2024-11-20 17:03:32.778265] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:09.144 [2024-11-20 17:03:32.778597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:09.144 [2024-11-20 17:03:32.778817] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:09.145 [2024-11-20 17:03:32.778859] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:09.145 [2024-11-20 17:03:32.779112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:09.145 17:03:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.145 "name": "raid_bdev1", 00:11:09.145 "uuid": "dfdab162-d460-4eae-a487-ac0ee65cbaa0", 00:11:09.145 "strip_size_kb": 64, 00:11:09.145 "state": "online", 00:11:09.145 "raid_level": "concat", 00:11:09.145 "superblock": true, 00:11:09.145 "num_base_bdevs": 4, 00:11:09.145 "num_base_bdevs_discovered": 4, 00:11:09.145 "num_base_bdevs_operational": 4, 00:11:09.145 "base_bdevs_list": [ 
00:11:09.145 { 00:11:09.145 "name": "BaseBdev1", 00:11:09.145 "uuid": "5954b984-9037-59e7-aeaa-76f904d1bb37", 00:11:09.145 "is_configured": true, 00:11:09.145 "data_offset": 2048, 00:11:09.145 "data_size": 63488 00:11:09.145 }, 00:11:09.145 { 00:11:09.145 "name": "BaseBdev2", 00:11:09.145 "uuid": "df681908-acf2-5887-a362-51f1474755ad", 00:11:09.145 "is_configured": true, 00:11:09.145 "data_offset": 2048, 00:11:09.145 "data_size": 63488 00:11:09.145 }, 00:11:09.145 { 00:11:09.145 "name": "BaseBdev3", 00:11:09.145 "uuid": "2a0c961a-1ddb-5ddc-8613-10c4b5d96c7b", 00:11:09.145 "is_configured": true, 00:11:09.145 "data_offset": 2048, 00:11:09.145 "data_size": 63488 00:11:09.145 }, 00:11:09.145 { 00:11:09.145 "name": "BaseBdev4", 00:11:09.145 "uuid": "93b899df-233f-561a-9989-f799ff040ad9", 00:11:09.145 "is_configured": true, 00:11:09.145 "data_offset": 2048, 00:11:09.145 "data_size": 63488 00:11:09.145 } 00:11:09.145 ] 00:11:09.145 }' 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.145 17:03:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.712 17:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:09.712 17:03:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:09.712 [2024-11-20 17:03:33.436710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.646 17:03:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.646 17:03:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.646 "name": "raid_bdev1", 00:11:10.646 "uuid": "dfdab162-d460-4eae-a487-ac0ee65cbaa0", 00:11:10.646 "strip_size_kb": 64, 00:11:10.646 "state": "online", 00:11:10.646 "raid_level": "concat", 00:11:10.646 "superblock": true, 00:11:10.646 "num_base_bdevs": 4, 00:11:10.646 "num_base_bdevs_discovered": 4, 00:11:10.646 "num_base_bdevs_operational": 4, 00:11:10.646 "base_bdevs_list": [ 00:11:10.646 { 00:11:10.646 "name": "BaseBdev1", 00:11:10.646 "uuid": "5954b984-9037-59e7-aeaa-76f904d1bb37", 00:11:10.646 "is_configured": true, 00:11:10.646 "data_offset": 2048, 00:11:10.646 "data_size": 63488 00:11:10.646 }, 00:11:10.646 { 00:11:10.646 "name": "BaseBdev2", 00:11:10.646 "uuid": "df681908-acf2-5887-a362-51f1474755ad", 00:11:10.646 "is_configured": true, 00:11:10.646 "data_offset": 2048, 00:11:10.646 "data_size": 63488 00:11:10.646 }, 00:11:10.646 { 00:11:10.646 "name": "BaseBdev3", 00:11:10.646 "uuid": "2a0c961a-1ddb-5ddc-8613-10c4b5d96c7b", 00:11:10.646 "is_configured": true, 00:11:10.646 "data_offset": 2048, 00:11:10.646 "data_size": 63488 00:11:10.646 }, 00:11:10.646 { 00:11:10.646 "name": "BaseBdev4", 00:11:10.646 "uuid": "93b899df-233f-561a-9989-f799ff040ad9", 00:11:10.646 "is_configured": true, 00:11:10.646 "data_offset": 2048, 00:11:10.646 "data_size": 63488 00:11:10.646 } 00:11:10.646 ] 00:11:10.646 }' 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.646 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.213 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:11.213 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.213 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.213 [2024-11-20 17:03:34.858811] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:11.213 [2024-11-20 17:03:34.858847] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.213 [2024-11-20 17:03:34.862306] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.213 [2024-11-20 17:03:34.862373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.213 [2024-11-20 17:03:34.862427] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.214 [2024-11-20 17:03:34.862446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:11.214 { 00:11:11.214 "results": [ 00:11:11.214 { 00:11:11.214 "job": "raid_bdev1", 00:11:11.214 "core_mask": "0x1", 00:11:11.214 "workload": "randrw", 00:11:11.214 "percentage": 50, 00:11:11.214 "status": "finished", 00:11:11.214 "queue_depth": 1, 00:11:11.214 "io_size": 131072, 00:11:11.214 "runtime": 1.419636, 00:11:11.214 "iops": 11245.136077135267, 00:11:11.214 "mibps": 1405.6420096419083, 00:11:11.214 "io_failed": 1, 00:11:11.214 "io_timeout": 0, 00:11:11.214 "avg_latency_us": 123.56563801497595, 00:11:11.214 "min_latency_us": 36.53818181818182, 00:11:11.214 "max_latency_us": 1846.9236363636364 00:11:11.214 } 00:11:11.214 ], 00:11:11.214 "core_count": 1 00:11:11.214 } 00:11:11.214 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.214 17:03:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72848 00:11:11.214 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72848 ']' 00:11:11.214 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72848 00:11:11.214 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:11.214 17:03:34 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.214 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72848 00:11:11.214 killing process with pid 72848 00:11:11.214 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.214 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.214 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72848' 00:11:11.214 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72848 00:11:11.214 [2024-11-20 17:03:34.899678] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.214 17:03:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72848 00:11:11.472 [2024-11-20 17:03:35.160526] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.409 17:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:12.409 17:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qATVLPdEiU 00:11:12.409 17:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:12.409 17:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:12.409 17:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:12.409 17:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:12.409 17:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:12.409 17:03:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:12.409 00:11:12.409 real 0m4.764s 00:11:12.409 user 0m5.885s 00:11:12.409 sys 0m0.604s 00:11:12.409 17:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:12.409 ************************************ 00:11:12.409 END TEST raid_read_error_test 00:11:12.409 ************************************ 00:11:12.409 17:03:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.409 17:03:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:12.409 17:03:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:12.409 17:03:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.409 17:03:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.409 ************************************ 00:11:12.409 START TEST raid_write_error_test 00:11:12.409 ************************************ 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FisKECNXFv 00:11:12.409 17:03:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72994 00:11:12.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72994 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72994 ']' 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.409 17:03:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.668 [2024-11-20 17:03:36.373906] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:11:12.668 [2024-11-20 17:03:36.374117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72994 ] 00:11:12.926 [2024-11-20 17:03:36.560660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.926 [2024-11-20 17:03:36.687561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.185 [2024-11-20 17:03:36.888259] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.185 [2024-11-20 17:03:36.888297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.759 BaseBdev1_malloc 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.759 true 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.759 [2024-11-20 17:03:37.387335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:13.759 [2024-11-20 17:03:37.387412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.759 [2024-11-20 17:03:37.387448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:13.759 [2024-11-20 17:03:37.387483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.759 [2024-11-20 17:03:37.390213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.759 [2024-11-20 17:03:37.390273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:13.759 BaseBdev1 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.759 BaseBdev2_malloc 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.759 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:13.760 17:03:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.760 true 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.760 [2024-11-20 17:03:37.448061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:13.760 [2024-11-20 17:03:37.448300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.760 [2024-11-20 17:03:37.448331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:13.760 [2024-11-20 17:03:37.448348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.760 [2024-11-20 17:03:37.450916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.760 [2024-11-20 17:03:37.450960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:13.760 BaseBdev2 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:13.760 BaseBdev3_malloc 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.760 true 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.760 [2024-11-20 17:03:37.518484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:13.760 [2024-11-20 17:03:37.518566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.760 [2024-11-20 17:03:37.518590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:13.760 [2024-11-20 17:03:37.518605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.760 [2024-11-20 17:03:37.521403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.760 [2024-11-20 17:03:37.521447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:13.760 BaseBdev3 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.760 BaseBdev4_malloc 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.760 true 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.760 [2024-11-20 17:03:37.579703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:13.760 [2024-11-20 17:03:37.579779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.760 [2024-11-20 17:03:37.579807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:13.760 [2024-11-20 17:03:37.579824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.760 [2024-11-20 17:03:37.582527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.760 [2024-11-20 17:03:37.582590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:13.760 BaseBdev4 
00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.760 [2024-11-20 17:03:37.587822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.760 [2024-11-20 17:03:37.590346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:13.760 [2024-11-20 17:03:37.590579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:13.760 [2024-11-20 17:03:37.590718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:13.760 [2024-11-20 17:03:37.591204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:13.760 [2024-11-20 17:03:37.591265] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:13.760 [2024-11-20 17:03:37.591681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:13.760 [2024-11-20 17:03:37.592092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:13.760 [2024-11-20 17:03:37.592218] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:13.760 [2024-11-20 17:03:37.592592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.760 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.021 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.021 "name": "raid_bdev1", 00:11:14.021 "uuid": "77d0b598-2959-4999-95eb-b7a67ce49e1e", 00:11:14.021 "strip_size_kb": 64, 00:11:14.021 "state": "online", 00:11:14.021 "raid_level": "concat", 00:11:14.021 "superblock": true, 00:11:14.021 "num_base_bdevs": 4, 00:11:14.021 "num_base_bdevs_discovered": 4, 00:11:14.021 
"num_base_bdevs_operational": 4, 00:11:14.021 "base_bdevs_list": [ 00:11:14.021 { 00:11:14.021 "name": "BaseBdev1", 00:11:14.021 "uuid": "3ff4095a-29d2-58b1-a06d-ab48e1a442a3", 00:11:14.021 "is_configured": true, 00:11:14.021 "data_offset": 2048, 00:11:14.021 "data_size": 63488 00:11:14.021 }, 00:11:14.021 { 00:11:14.021 "name": "BaseBdev2", 00:11:14.021 "uuid": "050f4fbe-554b-5ed8-a055-2adabdb3da7e", 00:11:14.021 "is_configured": true, 00:11:14.021 "data_offset": 2048, 00:11:14.021 "data_size": 63488 00:11:14.021 }, 00:11:14.021 { 00:11:14.021 "name": "BaseBdev3", 00:11:14.021 "uuid": "b5f59c9f-df62-5aaf-a377-c5dd03464d34", 00:11:14.021 "is_configured": true, 00:11:14.021 "data_offset": 2048, 00:11:14.021 "data_size": 63488 00:11:14.021 }, 00:11:14.021 { 00:11:14.021 "name": "BaseBdev4", 00:11:14.021 "uuid": "8f200b86-78c7-5a8b-92d0-6a43adc784e6", 00:11:14.021 "is_configured": true, 00:11:14.021 "data_offset": 2048, 00:11:14.021 "data_size": 63488 00:11:14.021 } 00:11:14.021 ] 00:11:14.021 }' 00:11:14.021 17:03:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.021 17:03:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.279 17:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:14.279 17:03:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:14.538 [2024-11-20 17:03:38.218013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.474 17:03:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.474 "name": "raid_bdev1", 00:11:15.474 "uuid": "77d0b598-2959-4999-95eb-b7a67ce49e1e", 00:11:15.474 "strip_size_kb": 64, 00:11:15.474 "state": "online", 00:11:15.474 "raid_level": "concat", 00:11:15.474 "superblock": true, 00:11:15.474 "num_base_bdevs": 4, 00:11:15.474 "num_base_bdevs_discovered": 4, 00:11:15.474 "num_base_bdevs_operational": 4, 00:11:15.474 "base_bdevs_list": [ 00:11:15.474 { 00:11:15.474 "name": "BaseBdev1", 00:11:15.474 "uuid": "3ff4095a-29d2-58b1-a06d-ab48e1a442a3", 00:11:15.474 "is_configured": true, 00:11:15.474 "data_offset": 2048, 00:11:15.474 "data_size": 63488 00:11:15.474 }, 00:11:15.474 { 00:11:15.474 "name": "BaseBdev2", 00:11:15.474 "uuid": "050f4fbe-554b-5ed8-a055-2adabdb3da7e", 00:11:15.474 "is_configured": true, 00:11:15.474 "data_offset": 2048, 00:11:15.474 "data_size": 63488 00:11:15.474 }, 00:11:15.474 { 00:11:15.474 "name": "BaseBdev3", 00:11:15.474 "uuid": "b5f59c9f-df62-5aaf-a377-c5dd03464d34", 00:11:15.474 "is_configured": true, 00:11:15.474 "data_offset": 2048, 00:11:15.474 "data_size": 63488 00:11:15.474 }, 00:11:15.474 { 00:11:15.474 "name": "BaseBdev4", 00:11:15.474 "uuid": "8f200b86-78c7-5a8b-92d0-6a43adc784e6", 00:11:15.474 "is_configured": true, 00:11:15.474 "data_offset": 2048, 00:11:15.474 "data_size": 63488 00:11:15.474 } 00:11:15.474 ] 00:11:15.474 }' 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.474 17:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.041 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:16.041 17:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.041 17:03:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:16.041 [2024-11-20 17:03:39.643708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.041 [2024-11-20 17:03:39.643746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.041 [2024-11-20 17:03:39.647306] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.041 [2024-11-20 17:03:39.647560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.041 [2024-11-20 17:03:39.647666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.041 [2024-11-20 17:03:39.647884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:16.041 17:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.041 { 00:11:16.041 "results": [ 00:11:16.041 { 00:11:16.041 "job": "raid_bdev1", 00:11:16.041 "core_mask": "0x1", 00:11:16.041 "workload": "randrw", 00:11:16.041 "percentage": 50, 00:11:16.041 "status": "finished", 00:11:16.041 "queue_depth": 1, 00:11:16.041 "io_size": 131072, 00:11:16.041 "runtime": 1.423213, 00:11:16.041 "iops": 11164.175706658103, 00:11:16.041 "mibps": 1395.5219633322629, 00:11:16.041 "io_failed": 1, 00:11:16.041 "io_timeout": 0, 00:11:16.041 "avg_latency_us": 124.70715075233136, 00:11:16.041 "min_latency_us": 36.77090909090909, 00:11:16.041 "max_latency_us": 1787.3454545454545 00:11:16.041 } 00:11:16.041 ], 00:11:16.041 "core_count": 1 00:11:16.041 } 00:11:16.041 17:03:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72994 00:11:16.041 17:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72994 ']' 00:11:16.041 17:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72994 00:11:16.041 17:03:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:16.041 17:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.041 17:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72994 00:11:16.041 17:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.041 killing process with pid 72994 00:11:16.041 17:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.041 17:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72994' 00:11:16.041 17:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72994 00:11:16.041 [2024-11-20 17:03:39.684842] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:16.041 17:03:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72994 00:11:16.299 [2024-11-20 17:03:39.943566] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:17.234 17:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FisKECNXFv 00:11:17.234 17:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:17.234 17:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:17.234 17:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:17.234 17:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:17.234 17:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:17.234 17:03:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:17.234 ************************************ 00:11:17.234 END TEST raid_write_error_test 00:11:17.234 ************************************ 00:11:17.234 17:03:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:17.234 00:11:17.234 real 0m4.734s 00:11:17.234 user 0m5.860s 00:11:17.234 sys 0m0.574s 00:11:17.234 17:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.234 17:03:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.234 17:03:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:17.234 17:03:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:17.234 17:03:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:17.234 17:03:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.234 17:03:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:17.234 ************************************ 00:11:17.234 START TEST raid_state_function_test 00:11:17.234 ************************************ 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:17.234 17:03:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:17.234 Process raid pid: 73138 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73138 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73138' 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73138 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73138 ']' 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.234 17:03:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.493 [2024-11-20 17:03:41.155425] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:11:17.493 [2024-11-20 17:03:41.155892] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.493 [2024-11-20 17:03:41.340828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.751 [2024-11-20 17:03:41.460581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.009 [2024-11-20 17:03:41.648796] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.010 [2024-11-20 17:03:41.648833] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.268 [2024-11-20 17:03:42.097887] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.268 [2024-11-20 17:03:42.097945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.268 [2024-11-20 17:03:42.097961] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:18.268 [2024-11-20 17:03:42.097977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:18.268 [2024-11-20 17:03:42.097986] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:18.268 [2024-11-20 17:03:42.098000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:18.268 [2024-11-20 17:03:42.098009] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:18.268 [2024-11-20 17:03:42.098023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.268 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.526 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.526 "name": "Existed_Raid", 00:11:18.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.526 "strip_size_kb": 0, 00:11:18.526 "state": "configuring", 00:11:18.526 "raid_level": "raid1", 00:11:18.526 "superblock": false, 00:11:18.526 "num_base_bdevs": 4, 00:11:18.526 "num_base_bdevs_discovered": 0, 00:11:18.526 "num_base_bdevs_operational": 4, 00:11:18.526 "base_bdevs_list": [ 00:11:18.526 { 00:11:18.526 "name": "BaseBdev1", 00:11:18.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.526 "is_configured": false, 00:11:18.526 "data_offset": 0, 00:11:18.526 "data_size": 0 00:11:18.526 }, 00:11:18.526 { 00:11:18.526 "name": "BaseBdev2", 00:11:18.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.526 "is_configured": false, 00:11:18.526 "data_offset": 0, 00:11:18.526 "data_size": 0 00:11:18.526 }, 00:11:18.526 { 00:11:18.526 "name": "BaseBdev3", 00:11:18.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.526 "is_configured": false, 00:11:18.526 "data_offset": 0, 00:11:18.526 "data_size": 0 00:11:18.526 }, 00:11:18.526 { 00:11:18.526 "name": "BaseBdev4", 00:11:18.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.526 "is_configured": false, 00:11:18.526 "data_offset": 0, 00:11:18.526 "data_size": 0 00:11:18.526 } 00:11:18.526 ] 00:11:18.526 }' 00:11:18.526 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.526 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.785 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:18.785 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.785 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.785 [2024-11-20 17:03:42.605957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.785 [2024-11-20 17:03:42.606002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:18.785 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.785 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:18.785 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.785 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.785 [2024-11-20 17:03:42.617975] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.785 [2024-11-20 17:03:42.618153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.785 [2024-11-20 17:03:42.618273] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:18.785 [2024-11-20 17:03:42.618333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:18.785 [2024-11-20 17:03:42.618444] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:18.785 [2024-11-20 17:03:42.618502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:18.785 [2024-11-20 17:03:42.618542] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:18.785 [2024-11-20 17:03:42.618680] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:18.785 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.785 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:18.785 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.785 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.043 [2024-11-20 17:03:42.661742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.043 BaseBdev1 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.043 [ 00:11:19.043 { 00:11:19.043 "name": "BaseBdev1", 00:11:19.043 "aliases": [ 00:11:19.043 "956b872f-76c2-4349-9d74-95c010064c2c" 00:11:19.043 ], 00:11:19.043 "product_name": "Malloc disk", 00:11:19.043 "block_size": 512, 00:11:19.043 "num_blocks": 65536, 00:11:19.043 "uuid": "956b872f-76c2-4349-9d74-95c010064c2c", 00:11:19.043 "assigned_rate_limits": { 00:11:19.043 "rw_ios_per_sec": 0, 00:11:19.043 "rw_mbytes_per_sec": 0, 00:11:19.043 "r_mbytes_per_sec": 0, 00:11:19.043 "w_mbytes_per_sec": 0 00:11:19.043 }, 00:11:19.043 "claimed": true, 00:11:19.043 "claim_type": "exclusive_write", 00:11:19.043 "zoned": false, 00:11:19.043 "supported_io_types": { 00:11:19.043 "read": true, 00:11:19.043 "write": true, 00:11:19.043 "unmap": true, 00:11:19.043 "flush": true, 00:11:19.043 "reset": true, 00:11:19.043 "nvme_admin": false, 00:11:19.043 "nvme_io": false, 00:11:19.043 "nvme_io_md": false, 00:11:19.043 "write_zeroes": true, 00:11:19.043 "zcopy": true, 00:11:19.043 "get_zone_info": false, 00:11:19.043 "zone_management": false, 00:11:19.043 "zone_append": false, 00:11:19.043 "compare": false, 00:11:19.043 "compare_and_write": false, 00:11:19.043 "abort": true, 00:11:19.043 "seek_hole": false, 00:11:19.043 "seek_data": false, 00:11:19.043 "copy": true, 00:11:19.043 "nvme_iov_md": false 00:11:19.043 }, 00:11:19.043 "memory_domains": [ 00:11:19.043 { 00:11:19.043 "dma_device_id": "system", 00:11:19.043 "dma_device_type": 1 00:11:19.043 }, 00:11:19.043 { 00:11:19.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.043 "dma_device_type": 2 00:11:19.043 } 00:11:19.043 ], 00:11:19.043 "driver_specific": {} 00:11:19.043 } 00:11:19.043 ] 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.043 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.043 "name": "Existed_Raid", 
00:11:19.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.043 "strip_size_kb": 0, 00:11:19.043 "state": "configuring", 00:11:19.043 "raid_level": "raid1", 00:11:19.044 "superblock": false, 00:11:19.044 "num_base_bdevs": 4, 00:11:19.044 "num_base_bdevs_discovered": 1, 00:11:19.044 "num_base_bdevs_operational": 4, 00:11:19.044 "base_bdevs_list": [ 00:11:19.044 { 00:11:19.044 "name": "BaseBdev1", 00:11:19.044 "uuid": "956b872f-76c2-4349-9d74-95c010064c2c", 00:11:19.044 "is_configured": true, 00:11:19.044 "data_offset": 0, 00:11:19.044 "data_size": 65536 00:11:19.044 }, 00:11:19.044 { 00:11:19.044 "name": "BaseBdev2", 00:11:19.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.044 "is_configured": false, 00:11:19.044 "data_offset": 0, 00:11:19.044 "data_size": 0 00:11:19.044 }, 00:11:19.044 { 00:11:19.044 "name": "BaseBdev3", 00:11:19.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.044 "is_configured": false, 00:11:19.044 "data_offset": 0, 00:11:19.044 "data_size": 0 00:11:19.044 }, 00:11:19.044 { 00:11:19.044 "name": "BaseBdev4", 00:11:19.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.044 "is_configured": false, 00:11:19.044 "data_offset": 0, 00:11:19.044 "data_size": 0 00:11:19.044 } 00:11:19.044 ] 00:11:19.044 }' 00:11:19.044 17:03:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.044 17:03:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.611 [2024-11-20 17:03:43.193986] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:19.611 [2024-11-20 17:03:43.194195] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.611 [2024-11-20 17:03:43.206017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.611 [2024-11-20 17:03:43.208540] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:19.611 [2024-11-20 17:03:43.208739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:19.611 [2024-11-20 17:03:43.208906] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:19.611 [2024-11-20 17:03:43.208942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:19.611 [2024-11-20 17:03:43.208954] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:19.611 [2024-11-20 17:03:43.208969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:19.611 
17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.611 "name": "Existed_Raid", 00:11:19.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.611 "strip_size_kb": 0, 00:11:19.611 "state": "configuring", 00:11:19.611 "raid_level": "raid1", 00:11:19.611 "superblock": false, 00:11:19.611 "num_base_bdevs": 4, 00:11:19.611 "num_base_bdevs_discovered": 1, 
00:11:19.611 "num_base_bdevs_operational": 4, 00:11:19.611 "base_bdevs_list": [ 00:11:19.611 { 00:11:19.611 "name": "BaseBdev1", 00:11:19.611 "uuid": "956b872f-76c2-4349-9d74-95c010064c2c", 00:11:19.611 "is_configured": true, 00:11:19.611 "data_offset": 0, 00:11:19.611 "data_size": 65536 00:11:19.611 }, 00:11:19.611 { 00:11:19.611 "name": "BaseBdev2", 00:11:19.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.611 "is_configured": false, 00:11:19.611 "data_offset": 0, 00:11:19.611 "data_size": 0 00:11:19.611 }, 00:11:19.611 { 00:11:19.611 "name": "BaseBdev3", 00:11:19.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.611 "is_configured": false, 00:11:19.611 "data_offset": 0, 00:11:19.611 "data_size": 0 00:11:19.611 }, 00:11:19.611 { 00:11:19.611 "name": "BaseBdev4", 00:11:19.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.611 "is_configured": false, 00:11:19.611 "data_offset": 0, 00:11:19.611 "data_size": 0 00:11:19.611 } 00:11:19.611 ] 00:11:19.611 }' 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.611 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.882 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:19.882 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.882 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.151 [2024-11-20 17:03:43.760580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.151 BaseBdev2 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.151 [ 00:11:20.151 { 00:11:20.151 "name": "BaseBdev2", 00:11:20.151 "aliases": [ 00:11:20.151 "aed0d493-5831-4a42-9332-f336ac7ba484" 00:11:20.151 ], 00:11:20.151 "product_name": "Malloc disk", 00:11:20.151 "block_size": 512, 00:11:20.151 "num_blocks": 65536, 00:11:20.151 "uuid": "aed0d493-5831-4a42-9332-f336ac7ba484", 00:11:20.151 "assigned_rate_limits": { 00:11:20.151 "rw_ios_per_sec": 0, 00:11:20.151 "rw_mbytes_per_sec": 0, 00:11:20.151 "r_mbytes_per_sec": 0, 00:11:20.151 "w_mbytes_per_sec": 0 00:11:20.151 }, 00:11:20.151 "claimed": true, 00:11:20.151 "claim_type": "exclusive_write", 00:11:20.151 "zoned": false, 00:11:20.151 "supported_io_types": { 00:11:20.151 "read": true, 
00:11:20.151 "write": true, 00:11:20.151 "unmap": true, 00:11:20.151 "flush": true, 00:11:20.151 "reset": true, 00:11:20.151 "nvme_admin": false, 00:11:20.151 "nvme_io": false, 00:11:20.151 "nvme_io_md": false, 00:11:20.151 "write_zeroes": true, 00:11:20.151 "zcopy": true, 00:11:20.151 "get_zone_info": false, 00:11:20.151 "zone_management": false, 00:11:20.151 "zone_append": false, 00:11:20.151 "compare": false, 00:11:20.151 "compare_and_write": false, 00:11:20.151 "abort": true, 00:11:20.151 "seek_hole": false, 00:11:20.151 "seek_data": false, 00:11:20.151 "copy": true, 00:11:20.151 "nvme_iov_md": false 00:11:20.151 }, 00:11:20.151 "memory_domains": [ 00:11:20.151 { 00:11:20.151 "dma_device_id": "system", 00:11:20.151 "dma_device_type": 1 00:11:20.151 }, 00:11:20.151 { 00:11:20.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.151 "dma_device_type": 2 00:11:20.151 } 00:11:20.151 ], 00:11:20.151 "driver_specific": {} 00:11:20.151 } 00:11:20.151 ] 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.151 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.151 "name": "Existed_Raid", 00:11:20.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.151 "strip_size_kb": 0, 00:11:20.151 "state": "configuring", 00:11:20.151 "raid_level": "raid1", 00:11:20.151 "superblock": false, 00:11:20.151 "num_base_bdevs": 4, 00:11:20.151 "num_base_bdevs_discovered": 2, 00:11:20.151 "num_base_bdevs_operational": 4, 00:11:20.151 "base_bdevs_list": [ 00:11:20.151 { 00:11:20.151 "name": "BaseBdev1", 00:11:20.151 "uuid": "956b872f-76c2-4349-9d74-95c010064c2c", 00:11:20.151 "is_configured": true, 00:11:20.152 "data_offset": 0, 00:11:20.152 "data_size": 65536 00:11:20.152 }, 00:11:20.152 { 00:11:20.152 "name": "BaseBdev2", 00:11:20.152 "uuid": "aed0d493-5831-4a42-9332-f336ac7ba484", 00:11:20.152 "is_configured": true, 
00:11:20.152 "data_offset": 0, 00:11:20.152 "data_size": 65536 00:11:20.152 }, 00:11:20.152 { 00:11:20.152 "name": "BaseBdev3", 00:11:20.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.152 "is_configured": false, 00:11:20.152 "data_offset": 0, 00:11:20.152 "data_size": 0 00:11:20.152 }, 00:11:20.152 { 00:11:20.152 "name": "BaseBdev4", 00:11:20.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.152 "is_configured": false, 00:11:20.152 "data_offset": 0, 00:11:20.152 "data_size": 0 00:11:20.152 } 00:11:20.152 ] 00:11:20.152 }' 00:11:20.152 17:03:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.152 17:03:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.410 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:20.410 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.410 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.668 [2024-11-20 17:03:44.313646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.668 BaseBdev3 00:11:20.668 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.669 [ 00:11:20.669 { 00:11:20.669 "name": "BaseBdev3", 00:11:20.669 "aliases": [ 00:11:20.669 "3279b54d-d740-4129-8de9-7ab3a86a0da8" 00:11:20.669 ], 00:11:20.669 "product_name": "Malloc disk", 00:11:20.669 "block_size": 512, 00:11:20.669 "num_blocks": 65536, 00:11:20.669 "uuid": "3279b54d-d740-4129-8de9-7ab3a86a0da8", 00:11:20.669 "assigned_rate_limits": { 00:11:20.669 "rw_ios_per_sec": 0, 00:11:20.669 "rw_mbytes_per_sec": 0, 00:11:20.669 "r_mbytes_per_sec": 0, 00:11:20.669 "w_mbytes_per_sec": 0 00:11:20.669 }, 00:11:20.669 "claimed": true, 00:11:20.669 "claim_type": "exclusive_write", 00:11:20.669 "zoned": false, 00:11:20.669 "supported_io_types": { 00:11:20.669 "read": true, 00:11:20.669 "write": true, 00:11:20.669 "unmap": true, 00:11:20.669 "flush": true, 00:11:20.669 "reset": true, 00:11:20.669 "nvme_admin": false, 00:11:20.669 "nvme_io": false, 00:11:20.669 "nvme_io_md": false, 00:11:20.669 "write_zeroes": true, 00:11:20.669 "zcopy": true, 00:11:20.669 "get_zone_info": false, 00:11:20.669 "zone_management": false, 00:11:20.669 "zone_append": false, 00:11:20.669 "compare": false, 00:11:20.669 "compare_and_write": false, 
00:11:20.669 "abort": true, 00:11:20.669 "seek_hole": false, 00:11:20.669 "seek_data": false, 00:11:20.669 "copy": true, 00:11:20.669 "nvme_iov_md": false 00:11:20.669 }, 00:11:20.669 "memory_domains": [ 00:11:20.669 { 00:11:20.669 "dma_device_id": "system", 00:11:20.669 "dma_device_type": 1 00:11:20.669 }, 00:11:20.669 { 00:11:20.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.669 "dma_device_type": 2 00:11:20.669 } 00:11:20.669 ], 00:11:20.669 "driver_specific": {} 00:11:20.669 } 00:11:20.669 ] 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.669 "name": "Existed_Raid", 00:11:20.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.669 "strip_size_kb": 0, 00:11:20.669 "state": "configuring", 00:11:20.669 "raid_level": "raid1", 00:11:20.669 "superblock": false, 00:11:20.669 "num_base_bdevs": 4, 00:11:20.669 "num_base_bdevs_discovered": 3, 00:11:20.669 "num_base_bdevs_operational": 4, 00:11:20.669 "base_bdevs_list": [ 00:11:20.669 { 00:11:20.669 "name": "BaseBdev1", 00:11:20.669 "uuid": "956b872f-76c2-4349-9d74-95c010064c2c", 00:11:20.669 "is_configured": true, 00:11:20.669 "data_offset": 0, 00:11:20.669 "data_size": 65536 00:11:20.669 }, 00:11:20.669 { 00:11:20.669 "name": "BaseBdev2", 00:11:20.669 "uuid": "aed0d493-5831-4a42-9332-f336ac7ba484", 00:11:20.669 "is_configured": true, 00:11:20.669 "data_offset": 0, 00:11:20.669 "data_size": 65536 00:11:20.669 }, 00:11:20.669 { 00:11:20.669 "name": "BaseBdev3", 00:11:20.669 "uuid": "3279b54d-d740-4129-8de9-7ab3a86a0da8", 00:11:20.669 "is_configured": true, 00:11:20.669 "data_offset": 0, 00:11:20.669 "data_size": 65536 00:11:20.669 }, 00:11:20.669 { 00:11:20.669 "name": "BaseBdev4", 00:11:20.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.669 "is_configured": false, 
00:11:20.669 "data_offset": 0, 00:11:20.669 "data_size": 0 00:11:20.669 } 00:11:20.669 ] 00:11:20.669 }' 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.669 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.236 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:21.236 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.236 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.236 [2024-11-20 17:03:44.900331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:21.236 [2024-11-20 17:03:44.900595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:21.236 [2024-11-20 17:03:44.900620] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:21.236 [2024-11-20 17:03:44.901017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:21.236 [2024-11-20 17:03:44.901311] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:21.236 [2024-11-20 17:03:44.901331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:21.236 [2024-11-20 17:03:44.901650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.236 BaseBdev4 00:11:21.236 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.236 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:21.236 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:21.236 17:03:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.236 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:21.236 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.236 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.236 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.236 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.236 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.236 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.236 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:21.236 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.236 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.236 [ 00:11:21.236 { 00:11:21.236 "name": "BaseBdev4", 00:11:21.236 "aliases": [ 00:11:21.236 "5e4c64d8-f7c3-46e0-9315-19ac9cf63c2a" 00:11:21.236 ], 00:11:21.236 "product_name": "Malloc disk", 00:11:21.236 "block_size": 512, 00:11:21.236 "num_blocks": 65536, 00:11:21.236 "uuid": "5e4c64d8-f7c3-46e0-9315-19ac9cf63c2a", 00:11:21.236 "assigned_rate_limits": { 00:11:21.236 "rw_ios_per_sec": 0, 00:11:21.236 "rw_mbytes_per_sec": 0, 00:11:21.236 "r_mbytes_per_sec": 0, 00:11:21.236 "w_mbytes_per_sec": 0 00:11:21.236 }, 00:11:21.236 "claimed": true, 00:11:21.236 "claim_type": "exclusive_write", 00:11:21.236 "zoned": false, 00:11:21.236 "supported_io_types": { 00:11:21.236 "read": true, 00:11:21.236 "write": true, 00:11:21.236 "unmap": true, 00:11:21.236 "flush": true, 00:11:21.236 "reset": true, 00:11:21.236 
"nvme_admin": false, 00:11:21.236 "nvme_io": false, 00:11:21.236 "nvme_io_md": false, 00:11:21.236 "write_zeroes": true, 00:11:21.236 "zcopy": true, 00:11:21.236 "get_zone_info": false, 00:11:21.237 "zone_management": false, 00:11:21.237 "zone_append": false, 00:11:21.237 "compare": false, 00:11:21.237 "compare_and_write": false, 00:11:21.237 "abort": true, 00:11:21.237 "seek_hole": false, 00:11:21.237 "seek_data": false, 00:11:21.237 "copy": true, 00:11:21.237 "nvme_iov_md": false 00:11:21.237 }, 00:11:21.237 "memory_domains": [ 00:11:21.237 { 00:11:21.237 "dma_device_id": "system", 00:11:21.237 "dma_device_type": 1 00:11:21.237 }, 00:11:21.237 { 00:11:21.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.237 "dma_device_type": 2 00:11:21.237 } 00:11:21.237 ], 00:11:21.237 "driver_specific": {} 00:11:21.237 } 00:11:21.237 ] 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.237 17:03:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.237 "name": "Existed_Raid", 00:11:21.237 "uuid": "f5988262-4bf5-41c0-8eb6-3d440a9e506e", 00:11:21.237 "strip_size_kb": 0, 00:11:21.237 "state": "online", 00:11:21.237 "raid_level": "raid1", 00:11:21.237 "superblock": false, 00:11:21.237 "num_base_bdevs": 4, 00:11:21.237 "num_base_bdevs_discovered": 4, 00:11:21.237 "num_base_bdevs_operational": 4, 00:11:21.237 "base_bdevs_list": [ 00:11:21.237 { 00:11:21.237 "name": "BaseBdev1", 00:11:21.237 "uuid": "956b872f-76c2-4349-9d74-95c010064c2c", 00:11:21.237 "is_configured": true, 00:11:21.237 "data_offset": 0, 00:11:21.237 "data_size": 65536 00:11:21.237 }, 00:11:21.237 { 00:11:21.237 "name": "BaseBdev2", 00:11:21.237 "uuid": "aed0d493-5831-4a42-9332-f336ac7ba484", 00:11:21.237 "is_configured": true, 00:11:21.237 "data_offset": 0, 00:11:21.237 "data_size": 65536 00:11:21.237 }, 00:11:21.237 { 00:11:21.237 "name": "BaseBdev3", 00:11:21.237 "uuid": 
"3279b54d-d740-4129-8de9-7ab3a86a0da8", 00:11:21.237 "is_configured": true, 00:11:21.237 "data_offset": 0, 00:11:21.237 "data_size": 65536 00:11:21.237 }, 00:11:21.237 { 00:11:21.237 "name": "BaseBdev4", 00:11:21.237 "uuid": "5e4c64d8-f7c3-46e0-9315-19ac9cf63c2a", 00:11:21.237 "is_configured": true, 00:11:21.237 "data_offset": 0, 00:11:21.237 "data_size": 65536 00:11:21.237 } 00:11:21.237 ] 00:11:21.237 }' 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.237 17:03:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.804 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.805 [2024-11-20 17:03:45.456992] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.805 17:03:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:21.805 "name": "Existed_Raid", 00:11:21.805 "aliases": [ 00:11:21.805 "f5988262-4bf5-41c0-8eb6-3d440a9e506e" 00:11:21.805 ], 00:11:21.805 "product_name": "Raid Volume", 00:11:21.805 "block_size": 512, 00:11:21.805 "num_blocks": 65536, 00:11:21.805 "uuid": "f5988262-4bf5-41c0-8eb6-3d440a9e506e", 00:11:21.805 "assigned_rate_limits": { 00:11:21.805 "rw_ios_per_sec": 0, 00:11:21.805 "rw_mbytes_per_sec": 0, 00:11:21.805 "r_mbytes_per_sec": 0, 00:11:21.805 "w_mbytes_per_sec": 0 00:11:21.805 }, 00:11:21.805 "claimed": false, 00:11:21.805 "zoned": false, 00:11:21.805 "supported_io_types": { 00:11:21.805 "read": true, 00:11:21.805 "write": true, 00:11:21.805 "unmap": false, 00:11:21.805 "flush": false, 00:11:21.805 "reset": true, 00:11:21.805 "nvme_admin": false, 00:11:21.805 "nvme_io": false, 00:11:21.805 "nvme_io_md": false, 00:11:21.805 "write_zeroes": true, 00:11:21.805 "zcopy": false, 00:11:21.805 "get_zone_info": false, 00:11:21.805 "zone_management": false, 00:11:21.805 "zone_append": false, 00:11:21.805 "compare": false, 00:11:21.805 "compare_and_write": false, 00:11:21.805 "abort": false, 00:11:21.805 "seek_hole": false, 00:11:21.805 "seek_data": false, 00:11:21.805 "copy": false, 00:11:21.805 "nvme_iov_md": false 00:11:21.805 }, 00:11:21.805 "memory_domains": [ 00:11:21.805 { 00:11:21.805 "dma_device_id": "system", 00:11:21.805 "dma_device_type": 1 00:11:21.805 }, 00:11:21.805 { 00:11:21.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.805 "dma_device_type": 2 00:11:21.805 }, 00:11:21.805 { 00:11:21.805 "dma_device_id": "system", 00:11:21.805 "dma_device_type": 1 00:11:21.805 }, 00:11:21.805 { 00:11:21.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.805 "dma_device_type": 2 00:11:21.805 }, 00:11:21.805 { 00:11:21.805 "dma_device_id": "system", 00:11:21.805 "dma_device_type": 1 00:11:21.805 }, 00:11:21.805 { 00:11:21.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:21.805 "dma_device_type": 2 00:11:21.805 }, 00:11:21.805 { 00:11:21.805 "dma_device_id": "system", 00:11:21.805 "dma_device_type": 1 00:11:21.805 }, 00:11:21.805 { 00:11:21.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.805 "dma_device_type": 2 00:11:21.805 } 00:11:21.805 ], 00:11:21.805 "driver_specific": { 00:11:21.805 "raid": { 00:11:21.805 "uuid": "f5988262-4bf5-41c0-8eb6-3d440a9e506e", 00:11:21.805 "strip_size_kb": 0, 00:11:21.805 "state": "online", 00:11:21.805 "raid_level": "raid1", 00:11:21.805 "superblock": false, 00:11:21.805 "num_base_bdevs": 4, 00:11:21.805 "num_base_bdevs_discovered": 4, 00:11:21.805 "num_base_bdevs_operational": 4, 00:11:21.805 "base_bdevs_list": [ 00:11:21.805 { 00:11:21.805 "name": "BaseBdev1", 00:11:21.805 "uuid": "956b872f-76c2-4349-9d74-95c010064c2c", 00:11:21.805 "is_configured": true, 00:11:21.805 "data_offset": 0, 00:11:21.805 "data_size": 65536 00:11:21.805 }, 00:11:21.805 { 00:11:21.805 "name": "BaseBdev2", 00:11:21.805 "uuid": "aed0d493-5831-4a42-9332-f336ac7ba484", 00:11:21.805 "is_configured": true, 00:11:21.805 "data_offset": 0, 00:11:21.805 "data_size": 65536 00:11:21.805 }, 00:11:21.805 { 00:11:21.805 "name": "BaseBdev3", 00:11:21.805 "uuid": "3279b54d-d740-4129-8de9-7ab3a86a0da8", 00:11:21.805 "is_configured": true, 00:11:21.805 "data_offset": 0, 00:11:21.805 "data_size": 65536 00:11:21.805 }, 00:11:21.805 { 00:11:21.805 "name": "BaseBdev4", 00:11:21.805 "uuid": "5e4c64d8-f7c3-46e0-9315-19ac9cf63c2a", 00:11:21.805 "is_configured": true, 00:11:21.805 "data_offset": 0, 00:11:21.805 "data_size": 65536 00:11:21.805 } 00:11:21.805 ] 00:11:21.805 } 00:11:21.805 } 00:11:21.805 }' 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:21.805 BaseBdev2 00:11:21.805 BaseBdev3 
00:11:21.805 BaseBdev4' 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.805 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.064 17:03:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.064 17:03:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.064 [2024-11-20 17:03:45.836723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.064 
17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.064 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.323 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.323 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.323 "name": "Existed_Raid", 00:11:22.323 "uuid": "f5988262-4bf5-41c0-8eb6-3d440a9e506e", 00:11:22.323 "strip_size_kb": 0, 00:11:22.323 "state": "online", 00:11:22.323 "raid_level": "raid1", 00:11:22.323 "superblock": false, 00:11:22.323 "num_base_bdevs": 4, 00:11:22.323 "num_base_bdevs_discovered": 3, 00:11:22.323 "num_base_bdevs_operational": 3, 00:11:22.323 "base_bdevs_list": [ 00:11:22.323 { 00:11:22.323 "name": null, 00:11:22.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.323 "is_configured": false, 00:11:22.323 "data_offset": 0, 00:11:22.323 "data_size": 65536 00:11:22.323 }, 00:11:22.323 { 00:11:22.323 "name": "BaseBdev2", 00:11:22.323 "uuid": "aed0d493-5831-4a42-9332-f336ac7ba484", 00:11:22.323 "is_configured": true, 00:11:22.323 "data_offset": 0, 00:11:22.323 "data_size": 65536 00:11:22.323 }, 00:11:22.323 { 00:11:22.323 "name": "BaseBdev3", 00:11:22.323 "uuid": "3279b54d-d740-4129-8de9-7ab3a86a0da8", 00:11:22.323 "is_configured": true, 00:11:22.323 "data_offset": 0, 
00:11:22.323 "data_size": 65536 00:11:22.323 }, 00:11:22.323 { 00:11:22.323 "name": "BaseBdev4", 00:11:22.323 "uuid": "5e4c64d8-f7c3-46e0-9315-19ac9cf63c2a", 00:11:22.323 "is_configured": true, 00:11:22.323 "data_offset": 0, 00:11:22.323 "data_size": 65536 00:11:22.323 } 00:11:22.323 ] 00:11:22.323 }' 00:11:22.323 17:03:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.323 17:03:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.582 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:22.582 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:22.582 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.582 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.582 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:22.582 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.840 [2024-11-20 17:03:46.490261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.840 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.840 [2024-11-20 17:03:46.630466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.099 [2024-11-20 17:03:46.771814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:23.099 [2024-11-20 17:03:46.771922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.099 [2024-11-20 17:03:46.848179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.099 [2024-11-20 17:03:46.848447] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.099 [2024-11-20 17:03:46.848482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.099 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.100 BaseBdev2 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.100 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.359 [ 00:11:23.359 { 00:11:23.359 "name": "BaseBdev2", 00:11:23.359 "aliases": [ 00:11:23.359 "fff6a659-1135-48ba-81e9-ebb932b6c8d7" 00:11:23.359 ], 00:11:23.359 "product_name": "Malloc disk", 00:11:23.359 "block_size": 512, 00:11:23.359 "num_blocks": 65536, 00:11:23.359 "uuid": "fff6a659-1135-48ba-81e9-ebb932b6c8d7", 00:11:23.359 "assigned_rate_limits": { 00:11:23.359 "rw_ios_per_sec": 0, 00:11:23.359 "rw_mbytes_per_sec": 0, 00:11:23.359 "r_mbytes_per_sec": 0, 00:11:23.359 "w_mbytes_per_sec": 0 00:11:23.359 }, 00:11:23.359 "claimed": false, 00:11:23.359 "zoned": false, 00:11:23.359 "supported_io_types": { 00:11:23.359 "read": true, 00:11:23.359 "write": true, 00:11:23.359 "unmap": true, 00:11:23.359 "flush": true, 00:11:23.359 "reset": true, 00:11:23.359 "nvme_admin": false, 00:11:23.359 "nvme_io": false, 00:11:23.359 "nvme_io_md": false, 00:11:23.359 "write_zeroes": true, 00:11:23.359 "zcopy": true, 00:11:23.359 "get_zone_info": false, 00:11:23.359 "zone_management": false, 00:11:23.359 "zone_append": false, 
00:11:23.359 "compare": false, 00:11:23.359 "compare_and_write": false, 00:11:23.359 "abort": true, 00:11:23.359 "seek_hole": false, 00:11:23.359 "seek_data": false, 00:11:23.359 "copy": true, 00:11:23.359 "nvme_iov_md": false 00:11:23.359 }, 00:11:23.359 "memory_domains": [ 00:11:23.359 { 00:11:23.359 "dma_device_id": "system", 00:11:23.359 "dma_device_type": 1 00:11:23.359 }, 00:11:23.359 { 00:11:23.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.359 "dma_device_type": 2 00:11:23.359 } 00:11:23.359 ], 00:11:23.359 "driver_specific": {} 00:11:23.359 } 00:11:23.359 ] 00:11:23.359 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.359 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:23.359 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:23.359 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:23.359 17:03:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:23.359 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.359 17:03:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.359 BaseBdev3 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.360 [ 00:11:23.360 { 00:11:23.360 "name": "BaseBdev3", 00:11:23.360 "aliases": [ 00:11:23.360 "ee7db1a8-0dad-469a-933d-44127a5d8fab" 00:11:23.360 ], 00:11:23.360 "product_name": "Malloc disk", 00:11:23.360 "block_size": 512, 00:11:23.360 "num_blocks": 65536, 00:11:23.360 "uuid": "ee7db1a8-0dad-469a-933d-44127a5d8fab", 00:11:23.360 "assigned_rate_limits": { 00:11:23.360 "rw_ios_per_sec": 0, 00:11:23.360 "rw_mbytes_per_sec": 0, 00:11:23.360 "r_mbytes_per_sec": 0, 00:11:23.360 "w_mbytes_per_sec": 0 00:11:23.360 }, 00:11:23.360 "claimed": false, 00:11:23.360 "zoned": false, 00:11:23.360 "supported_io_types": { 00:11:23.360 "read": true, 00:11:23.360 "write": true, 00:11:23.360 "unmap": true, 00:11:23.360 "flush": true, 00:11:23.360 "reset": true, 00:11:23.360 "nvme_admin": false, 00:11:23.360 "nvme_io": false, 00:11:23.360 "nvme_io_md": false, 00:11:23.360 "write_zeroes": true, 00:11:23.360 "zcopy": true, 00:11:23.360 "get_zone_info": false, 00:11:23.360 "zone_management": false, 00:11:23.360 "zone_append": false, 
00:11:23.360 "compare": false, 00:11:23.360 "compare_and_write": false, 00:11:23.360 "abort": true, 00:11:23.360 "seek_hole": false, 00:11:23.360 "seek_data": false, 00:11:23.360 "copy": true, 00:11:23.360 "nvme_iov_md": false 00:11:23.360 }, 00:11:23.360 "memory_domains": [ 00:11:23.360 { 00:11:23.360 "dma_device_id": "system", 00:11:23.360 "dma_device_type": 1 00:11:23.360 }, 00:11:23.360 { 00:11:23.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.360 "dma_device_type": 2 00:11:23.360 } 00:11:23.360 ], 00:11:23.360 "driver_specific": {} 00:11:23.360 } 00:11:23.360 ] 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.360 BaseBdev4 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.360 [ 00:11:23.360 { 00:11:23.360 "name": "BaseBdev4", 00:11:23.360 "aliases": [ 00:11:23.360 "754aea9f-e59b-4dab-88dc-4f46f287c910" 00:11:23.360 ], 00:11:23.360 "product_name": "Malloc disk", 00:11:23.360 "block_size": 512, 00:11:23.360 "num_blocks": 65536, 00:11:23.360 "uuid": "754aea9f-e59b-4dab-88dc-4f46f287c910", 00:11:23.360 "assigned_rate_limits": { 00:11:23.360 "rw_ios_per_sec": 0, 00:11:23.360 "rw_mbytes_per_sec": 0, 00:11:23.360 "r_mbytes_per_sec": 0, 00:11:23.360 "w_mbytes_per_sec": 0 00:11:23.360 }, 00:11:23.360 "claimed": false, 00:11:23.360 "zoned": false, 00:11:23.360 "supported_io_types": { 00:11:23.360 "read": true, 00:11:23.360 "write": true, 00:11:23.360 "unmap": true, 00:11:23.360 "flush": true, 00:11:23.360 "reset": true, 00:11:23.360 "nvme_admin": false, 00:11:23.360 "nvme_io": false, 00:11:23.360 "nvme_io_md": false, 00:11:23.360 "write_zeroes": true, 00:11:23.360 "zcopy": true, 00:11:23.360 "get_zone_info": false, 00:11:23.360 "zone_management": false, 00:11:23.360 "zone_append": false, 
00:11:23.360 "compare": false, 00:11:23.360 "compare_and_write": false, 00:11:23.360 "abort": true, 00:11:23.360 "seek_hole": false, 00:11:23.360 "seek_data": false, 00:11:23.360 "copy": true, 00:11:23.360 "nvme_iov_md": false 00:11:23.360 }, 00:11:23.360 "memory_domains": [ 00:11:23.360 { 00:11:23.360 "dma_device_id": "system", 00:11:23.360 "dma_device_type": 1 00:11:23.360 }, 00:11:23.360 { 00:11:23.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.360 "dma_device_type": 2 00:11:23.360 } 00:11:23.360 ], 00:11:23.360 "driver_specific": {} 00:11:23.360 } 00:11:23.360 ] 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.360 [2024-11-20 17:03:47.133883] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:23.360 [2024-11-20 17:03:47.134085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:23.360 [2024-11-20 17:03:47.134223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.360 [2024-11-20 17:03:47.136715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.360 [2024-11-20 17:03:47.136921] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.360 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:23.360 "name": "Existed_Raid", 00:11:23.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.360 "strip_size_kb": 0, 00:11:23.360 "state": "configuring", 00:11:23.360 "raid_level": "raid1", 00:11:23.360 "superblock": false, 00:11:23.360 "num_base_bdevs": 4, 00:11:23.360 "num_base_bdevs_discovered": 3, 00:11:23.360 "num_base_bdevs_operational": 4, 00:11:23.360 "base_bdevs_list": [ 00:11:23.360 { 00:11:23.360 "name": "BaseBdev1", 00:11:23.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.360 "is_configured": false, 00:11:23.360 "data_offset": 0, 00:11:23.361 "data_size": 0 00:11:23.361 }, 00:11:23.361 { 00:11:23.361 "name": "BaseBdev2", 00:11:23.361 "uuid": "fff6a659-1135-48ba-81e9-ebb932b6c8d7", 00:11:23.361 "is_configured": true, 00:11:23.361 "data_offset": 0, 00:11:23.361 "data_size": 65536 00:11:23.361 }, 00:11:23.361 { 00:11:23.361 "name": "BaseBdev3", 00:11:23.361 "uuid": "ee7db1a8-0dad-469a-933d-44127a5d8fab", 00:11:23.361 "is_configured": true, 00:11:23.361 "data_offset": 0, 00:11:23.361 "data_size": 65536 00:11:23.361 }, 00:11:23.361 { 00:11:23.361 "name": "BaseBdev4", 00:11:23.361 "uuid": "754aea9f-e59b-4dab-88dc-4f46f287c910", 00:11:23.361 "is_configured": true, 00:11:23.361 "data_offset": 0, 00:11:23.361 "data_size": 65536 00:11:23.361 } 00:11:23.361 ] 00:11:23.361 }' 00:11:23.361 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.361 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.927 [2024-11-20 17:03:47.662112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.927 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.927 "name": "Existed_Raid", 00:11:23.927 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:23.927 "strip_size_kb": 0, 00:11:23.927 "state": "configuring", 00:11:23.927 "raid_level": "raid1", 00:11:23.927 "superblock": false, 00:11:23.927 "num_base_bdevs": 4, 00:11:23.927 "num_base_bdevs_discovered": 2, 00:11:23.927 "num_base_bdevs_operational": 4, 00:11:23.927 "base_bdevs_list": [ 00:11:23.927 { 00:11:23.927 "name": "BaseBdev1", 00:11:23.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.927 "is_configured": false, 00:11:23.927 "data_offset": 0, 00:11:23.928 "data_size": 0 00:11:23.928 }, 00:11:23.928 { 00:11:23.928 "name": null, 00:11:23.928 "uuid": "fff6a659-1135-48ba-81e9-ebb932b6c8d7", 00:11:23.928 "is_configured": false, 00:11:23.928 "data_offset": 0, 00:11:23.928 "data_size": 65536 00:11:23.928 }, 00:11:23.928 { 00:11:23.928 "name": "BaseBdev3", 00:11:23.928 "uuid": "ee7db1a8-0dad-469a-933d-44127a5d8fab", 00:11:23.928 "is_configured": true, 00:11:23.928 "data_offset": 0, 00:11:23.928 "data_size": 65536 00:11:23.928 }, 00:11:23.928 { 00:11:23.928 "name": "BaseBdev4", 00:11:23.928 "uuid": "754aea9f-e59b-4dab-88dc-4f46f287c910", 00:11:23.928 "is_configured": true, 00:11:23.928 "data_offset": 0, 00:11:23.928 "data_size": 65536 00:11:23.928 } 00:11:23.928 ] 00:11:23.928 }' 00:11:23.928 17:03:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.928 17:03:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.494 [2024-11-20 17:03:48.264310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.494 BaseBdev1 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.494 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.494 [ 00:11:24.494 { 00:11:24.494 "name": "BaseBdev1", 00:11:24.494 "aliases": [ 00:11:24.494 "3b120f3d-e06f-4296-9f53-8c0d6d55b407" 00:11:24.494 ], 00:11:24.494 "product_name": "Malloc disk", 00:11:24.494 "block_size": 512, 00:11:24.494 "num_blocks": 65536, 00:11:24.494 "uuid": "3b120f3d-e06f-4296-9f53-8c0d6d55b407", 00:11:24.494 "assigned_rate_limits": { 00:11:24.494 "rw_ios_per_sec": 0, 00:11:24.494 "rw_mbytes_per_sec": 0, 00:11:24.494 "r_mbytes_per_sec": 0, 00:11:24.494 "w_mbytes_per_sec": 0 00:11:24.494 }, 00:11:24.494 "claimed": true, 00:11:24.494 "claim_type": "exclusive_write", 00:11:24.494 "zoned": false, 00:11:24.494 "supported_io_types": { 00:11:24.494 "read": true, 00:11:24.494 "write": true, 00:11:24.495 "unmap": true, 00:11:24.495 "flush": true, 00:11:24.495 "reset": true, 00:11:24.495 "nvme_admin": false, 00:11:24.495 "nvme_io": false, 00:11:24.495 "nvme_io_md": false, 00:11:24.495 "write_zeroes": true, 00:11:24.495 "zcopy": true, 00:11:24.495 "get_zone_info": false, 00:11:24.495 "zone_management": false, 00:11:24.495 "zone_append": false, 00:11:24.495 "compare": false, 00:11:24.495 "compare_and_write": false, 00:11:24.495 "abort": true, 00:11:24.495 "seek_hole": false, 00:11:24.495 "seek_data": false, 00:11:24.495 "copy": true, 00:11:24.495 "nvme_iov_md": false 00:11:24.495 }, 00:11:24.495 "memory_domains": [ 00:11:24.495 { 00:11:24.495 "dma_device_id": "system", 00:11:24.495 "dma_device_type": 1 00:11:24.495 }, 00:11:24.495 { 00:11:24.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.495 "dma_device_type": 2 00:11:24.495 } 00:11:24.495 ], 00:11:24.495 "driver_specific": {} 00:11:24.495 } 00:11:24.495 ] 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.495 "name": "Existed_Raid", 00:11:24.495 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:24.495 "strip_size_kb": 0, 00:11:24.495 "state": "configuring", 00:11:24.495 "raid_level": "raid1", 00:11:24.495 "superblock": false, 00:11:24.495 "num_base_bdevs": 4, 00:11:24.495 "num_base_bdevs_discovered": 3, 00:11:24.495 "num_base_bdevs_operational": 4, 00:11:24.495 "base_bdevs_list": [ 00:11:24.495 { 00:11:24.495 "name": "BaseBdev1", 00:11:24.495 "uuid": "3b120f3d-e06f-4296-9f53-8c0d6d55b407", 00:11:24.495 "is_configured": true, 00:11:24.495 "data_offset": 0, 00:11:24.495 "data_size": 65536 00:11:24.495 }, 00:11:24.495 { 00:11:24.495 "name": null, 00:11:24.495 "uuid": "fff6a659-1135-48ba-81e9-ebb932b6c8d7", 00:11:24.495 "is_configured": false, 00:11:24.495 "data_offset": 0, 00:11:24.495 "data_size": 65536 00:11:24.495 }, 00:11:24.495 { 00:11:24.495 "name": "BaseBdev3", 00:11:24.495 "uuid": "ee7db1a8-0dad-469a-933d-44127a5d8fab", 00:11:24.495 "is_configured": true, 00:11:24.495 "data_offset": 0, 00:11:24.495 "data_size": 65536 00:11:24.495 }, 00:11:24.495 { 00:11:24.495 "name": "BaseBdev4", 00:11:24.495 "uuid": "754aea9f-e59b-4dab-88dc-4f46f287c910", 00:11:24.495 "is_configured": true, 00:11:24.495 "data_offset": 0, 00:11:24.495 "data_size": 65536 00:11:24.495 } 00:11:24.495 ] 00:11:24.495 }' 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.495 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.062 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.062 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.062 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.062 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:25.062 17:03:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.062 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:25.062 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:25.062 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.063 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.063 [2024-11-20 17:03:48.876573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:25.063 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.063 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:25.063 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.063 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.063 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.063 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.063 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.063 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.063 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.063 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.063 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.063 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:25.063 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.063 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.063 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.063 17:03:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.322 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.322 "name": "Existed_Raid", 00:11:25.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.322 "strip_size_kb": 0, 00:11:25.322 "state": "configuring", 00:11:25.322 "raid_level": "raid1", 00:11:25.322 "superblock": false, 00:11:25.322 "num_base_bdevs": 4, 00:11:25.322 "num_base_bdevs_discovered": 2, 00:11:25.322 "num_base_bdevs_operational": 4, 00:11:25.322 "base_bdevs_list": [ 00:11:25.322 { 00:11:25.322 "name": "BaseBdev1", 00:11:25.322 "uuid": "3b120f3d-e06f-4296-9f53-8c0d6d55b407", 00:11:25.322 "is_configured": true, 00:11:25.322 "data_offset": 0, 00:11:25.322 "data_size": 65536 00:11:25.322 }, 00:11:25.322 { 00:11:25.322 "name": null, 00:11:25.322 "uuid": "fff6a659-1135-48ba-81e9-ebb932b6c8d7", 00:11:25.322 "is_configured": false, 00:11:25.322 "data_offset": 0, 00:11:25.322 "data_size": 65536 00:11:25.322 }, 00:11:25.322 { 00:11:25.322 "name": null, 00:11:25.322 "uuid": "ee7db1a8-0dad-469a-933d-44127a5d8fab", 00:11:25.322 "is_configured": false, 00:11:25.322 "data_offset": 0, 00:11:25.322 "data_size": 65536 00:11:25.322 }, 00:11:25.322 { 00:11:25.322 "name": "BaseBdev4", 00:11:25.322 "uuid": "754aea9f-e59b-4dab-88dc-4f46f287c910", 00:11:25.322 "is_configured": true, 00:11:25.322 "data_offset": 0, 00:11:25.322 "data_size": 65536 00:11:25.322 } 00:11:25.322 ] 00:11:25.322 }' 00:11:25.322 17:03:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.322 17:03:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.579 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.579 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:25.579 17:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.579 17:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.579 17:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.579 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:25.579 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:25.579 17:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.579 17:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.579 [2024-11-20 17:03:49.444752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.837 17:03:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.837 "name": "Existed_Raid", 00:11:25.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.837 "strip_size_kb": 0, 00:11:25.837 "state": "configuring", 00:11:25.837 "raid_level": "raid1", 00:11:25.837 "superblock": false, 00:11:25.837 "num_base_bdevs": 4, 00:11:25.837 "num_base_bdevs_discovered": 3, 00:11:25.837 "num_base_bdevs_operational": 4, 00:11:25.837 "base_bdevs_list": [ 00:11:25.837 { 00:11:25.837 "name": "BaseBdev1", 00:11:25.837 "uuid": "3b120f3d-e06f-4296-9f53-8c0d6d55b407", 00:11:25.837 "is_configured": true, 00:11:25.837 "data_offset": 0, 00:11:25.837 "data_size": 65536 00:11:25.837 }, 00:11:25.837 { 00:11:25.837 "name": null, 00:11:25.837 "uuid": "fff6a659-1135-48ba-81e9-ebb932b6c8d7", 00:11:25.837 "is_configured": false, 00:11:25.837 "data_offset": 
0, 00:11:25.837 "data_size": 65536 00:11:25.837 }, 00:11:25.837 { 00:11:25.837 "name": "BaseBdev3", 00:11:25.837 "uuid": "ee7db1a8-0dad-469a-933d-44127a5d8fab", 00:11:25.837 "is_configured": true, 00:11:25.837 "data_offset": 0, 00:11:25.837 "data_size": 65536 00:11:25.837 }, 00:11:25.837 { 00:11:25.837 "name": "BaseBdev4", 00:11:25.837 "uuid": "754aea9f-e59b-4dab-88dc-4f46f287c910", 00:11:25.837 "is_configured": true, 00:11:25.837 "data_offset": 0, 00:11:25.837 "data_size": 65536 00:11:25.837 } 00:11:25.837 ] 00:11:25.837 }' 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.837 17:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.404 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.404 17:03:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:26.404 17:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.404 17:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.404 17:03:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.404 [2024-11-20 17:03:50.040989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.404 17:03:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.404 "name": "Existed_Raid", 00:11:26.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.404 "strip_size_kb": 0, 00:11:26.404 "state": "configuring", 00:11:26.404 
"raid_level": "raid1", 00:11:26.404 "superblock": false, 00:11:26.404 "num_base_bdevs": 4, 00:11:26.404 "num_base_bdevs_discovered": 2, 00:11:26.404 "num_base_bdevs_operational": 4, 00:11:26.404 "base_bdevs_list": [ 00:11:26.404 { 00:11:26.404 "name": null, 00:11:26.404 "uuid": "3b120f3d-e06f-4296-9f53-8c0d6d55b407", 00:11:26.404 "is_configured": false, 00:11:26.404 "data_offset": 0, 00:11:26.404 "data_size": 65536 00:11:26.404 }, 00:11:26.404 { 00:11:26.404 "name": null, 00:11:26.404 "uuid": "fff6a659-1135-48ba-81e9-ebb932b6c8d7", 00:11:26.404 "is_configured": false, 00:11:26.404 "data_offset": 0, 00:11:26.404 "data_size": 65536 00:11:26.404 }, 00:11:26.404 { 00:11:26.404 "name": "BaseBdev3", 00:11:26.404 "uuid": "ee7db1a8-0dad-469a-933d-44127a5d8fab", 00:11:26.404 "is_configured": true, 00:11:26.404 "data_offset": 0, 00:11:26.404 "data_size": 65536 00:11:26.404 }, 00:11:26.404 { 00:11:26.404 "name": "BaseBdev4", 00:11:26.404 "uuid": "754aea9f-e59b-4dab-88dc-4f46f287c910", 00:11:26.404 "is_configured": true, 00:11:26.404 "data_offset": 0, 00:11:26.404 "data_size": 65536 00:11:26.404 } 00:11:26.404 ] 00:11:26.404 }' 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.404 17:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.971 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.972 [2024-11-20 17:03:50.702093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.972 "name": "Existed_Raid", 00:11:26.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.972 "strip_size_kb": 0, 00:11:26.972 "state": "configuring", 00:11:26.972 "raid_level": "raid1", 00:11:26.972 "superblock": false, 00:11:26.972 "num_base_bdevs": 4, 00:11:26.972 "num_base_bdevs_discovered": 3, 00:11:26.972 "num_base_bdevs_operational": 4, 00:11:26.972 "base_bdevs_list": [ 00:11:26.972 { 00:11:26.972 "name": null, 00:11:26.972 "uuid": "3b120f3d-e06f-4296-9f53-8c0d6d55b407", 00:11:26.972 "is_configured": false, 00:11:26.972 "data_offset": 0, 00:11:26.972 "data_size": 65536 00:11:26.972 }, 00:11:26.972 { 00:11:26.972 "name": "BaseBdev2", 00:11:26.972 "uuid": "fff6a659-1135-48ba-81e9-ebb932b6c8d7", 00:11:26.972 "is_configured": true, 00:11:26.972 "data_offset": 0, 00:11:26.972 "data_size": 65536 00:11:26.972 }, 00:11:26.972 { 00:11:26.972 "name": "BaseBdev3", 00:11:26.972 "uuid": "ee7db1a8-0dad-469a-933d-44127a5d8fab", 00:11:26.972 "is_configured": true, 00:11:26.972 "data_offset": 0, 00:11:26.972 "data_size": 65536 00:11:26.972 }, 00:11:26.972 { 00:11:26.972 "name": "BaseBdev4", 00:11:26.972 "uuid": "754aea9f-e59b-4dab-88dc-4f46f287c910", 00:11:26.972 "is_configured": true, 00:11:26.972 "data_offset": 0, 00:11:26.972 "data_size": 65536 00:11:26.972 } 00:11:26.972 ] 00:11:26.972 }' 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.972 17:03:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.555 17:03:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3b120f3d-e06f-4296-9f53-8c0d6d55b407 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.555 [2024-11-20 17:03:51.368629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:27.555 [2024-11-20 17:03:51.368854] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:27.555 [2024-11-20 17:03:51.368885] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:27.555 
[2024-11-20 17:03:51.369228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:27.555 [2024-11-20 17:03:51.369434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:27.555 [2024-11-20 17:03:51.369450] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:27.555 [2024-11-20 17:03:51.369739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.555 NewBaseBdev 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.555 [ 00:11:27.555 { 00:11:27.555 "name": "NewBaseBdev", 00:11:27.555 "aliases": [ 00:11:27.555 "3b120f3d-e06f-4296-9f53-8c0d6d55b407" 00:11:27.555 ], 00:11:27.555 "product_name": "Malloc disk", 00:11:27.555 "block_size": 512, 00:11:27.555 "num_blocks": 65536, 00:11:27.555 "uuid": "3b120f3d-e06f-4296-9f53-8c0d6d55b407", 00:11:27.555 "assigned_rate_limits": { 00:11:27.555 "rw_ios_per_sec": 0, 00:11:27.555 "rw_mbytes_per_sec": 0, 00:11:27.555 "r_mbytes_per_sec": 0, 00:11:27.555 "w_mbytes_per_sec": 0 00:11:27.555 }, 00:11:27.555 "claimed": true, 00:11:27.555 "claim_type": "exclusive_write", 00:11:27.555 "zoned": false, 00:11:27.555 "supported_io_types": { 00:11:27.555 "read": true, 00:11:27.555 "write": true, 00:11:27.555 "unmap": true, 00:11:27.555 "flush": true, 00:11:27.555 "reset": true, 00:11:27.555 "nvme_admin": false, 00:11:27.555 "nvme_io": false, 00:11:27.555 "nvme_io_md": false, 00:11:27.555 "write_zeroes": true, 00:11:27.555 "zcopy": true, 00:11:27.555 "get_zone_info": false, 00:11:27.555 "zone_management": false, 00:11:27.555 "zone_append": false, 00:11:27.555 "compare": false, 00:11:27.555 "compare_and_write": false, 00:11:27.555 "abort": true, 00:11:27.555 "seek_hole": false, 00:11:27.555 "seek_data": false, 00:11:27.555 "copy": true, 00:11:27.555 "nvme_iov_md": false 00:11:27.555 }, 00:11:27.555 "memory_domains": [ 00:11:27.555 { 00:11:27.555 "dma_device_id": "system", 00:11:27.555 "dma_device_type": 1 00:11:27.555 }, 00:11:27.555 { 00:11:27.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.555 "dma_device_type": 2 00:11:27.555 } 00:11:27.555 ], 00:11:27.555 "driver_specific": {} 00:11:27.555 } 00:11:27.555 ] 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.555 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.830 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.830 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.830 "name": "Existed_Raid", 00:11:27.830 "uuid": "10e5771a-7ccf-4ff4-ab21-e4ee3a972ffb", 00:11:27.830 "strip_size_kb": 0, 00:11:27.830 "state": "online", 00:11:27.830 
"raid_level": "raid1", 00:11:27.830 "superblock": false, 00:11:27.830 "num_base_bdevs": 4, 00:11:27.830 "num_base_bdevs_discovered": 4, 00:11:27.830 "num_base_bdevs_operational": 4, 00:11:27.830 "base_bdevs_list": [ 00:11:27.830 { 00:11:27.830 "name": "NewBaseBdev", 00:11:27.830 "uuid": "3b120f3d-e06f-4296-9f53-8c0d6d55b407", 00:11:27.830 "is_configured": true, 00:11:27.830 "data_offset": 0, 00:11:27.830 "data_size": 65536 00:11:27.830 }, 00:11:27.830 { 00:11:27.830 "name": "BaseBdev2", 00:11:27.830 "uuid": "fff6a659-1135-48ba-81e9-ebb932b6c8d7", 00:11:27.830 "is_configured": true, 00:11:27.830 "data_offset": 0, 00:11:27.830 "data_size": 65536 00:11:27.830 }, 00:11:27.830 { 00:11:27.830 "name": "BaseBdev3", 00:11:27.830 "uuid": "ee7db1a8-0dad-469a-933d-44127a5d8fab", 00:11:27.830 "is_configured": true, 00:11:27.830 "data_offset": 0, 00:11:27.830 "data_size": 65536 00:11:27.830 }, 00:11:27.830 { 00:11:27.830 "name": "BaseBdev4", 00:11:27.830 "uuid": "754aea9f-e59b-4dab-88dc-4f46f287c910", 00:11:27.830 "is_configured": true, 00:11:27.830 "data_offset": 0, 00:11:27.830 "data_size": 65536 00:11:27.830 } 00:11:27.830 ] 00:11:27.830 }' 00:11:27.830 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.830 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.089 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:28.089 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:28.089 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:28.089 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:28.089 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:28.089 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:28.089 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:28.089 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:28.089 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.089 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.089 [2024-11-20 17:03:51.897344] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.089 17:03:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.089 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:28.089 "name": "Existed_Raid", 00:11:28.089 "aliases": [ 00:11:28.089 "10e5771a-7ccf-4ff4-ab21-e4ee3a972ffb" 00:11:28.089 ], 00:11:28.089 "product_name": "Raid Volume", 00:11:28.089 "block_size": 512, 00:11:28.089 "num_blocks": 65536, 00:11:28.089 "uuid": "10e5771a-7ccf-4ff4-ab21-e4ee3a972ffb", 00:11:28.089 "assigned_rate_limits": { 00:11:28.089 "rw_ios_per_sec": 0, 00:11:28.089 "rw_mbytes_per_sec": 0, 00:11:28.089 "r_mbytes_per_sec": 0, 00:11:28.089 "w_mbytes_per_sec": 0 00:11:28.089 }, 00:11:28.089 "claimed": false, 00:11:28.089 "zoned": false, 00:11:28.089 "supported_io_types": { 00:11:28.089 "read": true, 00:11:28.089 "write": true, 00:11:28.089 "unmap": false, 00:11:28.089 "flush": false, 00:11:28.089 "reset": true, 00:11:28.089 "nvme_admin": false, 00:11:28.089 "nvme_io": false, 00:11:28.089 "nvme_io_md": false, 00:11:28.089 "write_zeroes": true, 00:11:28.089 "zcopy": false, 00:11:28.089 "get_zone_info": false, 00:11:28.089 "zone_management": false, 00:11:28.089 "zone_append": false, 00:11:28.089 "compare": false, 00:11:28.089 "compare_and_write": false, 00:11:28.089 "abort": false, 00:11:28.089 "seek_hole": false, 00:11:28.089 "seek_data": false, 00:11:28.089 
"copy": false, 00:11:28.089 "nvme_iov_md": false 00:11:28.089 }, 00:11:28.089 "memory_domains": [ 00:11:28.089 { 00:11:28.090 "dma_device_id": "system", 00:11:28.090 "dma_device_type": 1 00:11:28.090 }, 00:11:28.090 { 00:11:28.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.090 "dma_device_type": 2 00:11:28.090 }, 00:11:28.090 { 00:11:28.090 "dma_device_id": "system", 00:11:28.090 "dma_device_type": 1 00:11:28.090 }, 00:11:28.090 { 00:11:28.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.090 "dma_device_type": 2 00:11:28.090 }, 00:11:28.090 { 00:11:28.090 "dma_device_id": "system", 00:11:28.090 "dma_device_type": 1 00:11:28.090 }, 00:11:28.090 { 00:11:28.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.090 "dma_device_type": 2 00:11:28.090 }, 00:11:28.090 { 00:11:28.090 "dma_device_id": "system", 00:11:28.090 "dma_device_type": 1 00:11:28.090 }, 00:11:28.090 { 00:11:28.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.090 "dma_device_type": 2 00:11:28.090 } 00:11:28.090 ], 00:11:28.090 "driver_specific": { 00:11:28.090 "raid": { 00:11:28.090 "uuid": "10e5771a-7ccf-4ff4-ab21-e4ee3a972ffb", 00:11:28.090 "strip_size_kb": 0, 00:11:28.090 "state": "online", 00:11:28.090 "raid_level": "raid1", 00:11:28.090 "superblock": false, 00:11:28.090 "num_base_bdevs": 4, 00:11:28.090 "num_base_bdevs_discovered": 4, 00:11:28.090 "num_base_bdevs_operational": 4, 00:11:28.090 "base_bdevs_list": [ 00:11:28.090 { 00:11:28.090 "name": "NewBaseBdev", 00:11:28.090 "uuid": "3b120f3d-e06f-4296-9f53-8c0d6d55b407", 00:11:28.090 "is_configured": true, 00:11:28.090 "data_offset": 0, 00:11:28.090 "data_size": 65536 00:11:28.090 }, 00:11:28.090 { 00:11:28.090 "name": "BaseBdev2", 00:11:28.090 "uuid": "fff6a659-1135-48ba-81e9-ebb932b6c8d7", 00:11:28.090 "is_configured": true, 00:11:28.090 "data_offset": 0, 00:11:28.090 "data_size": 65536 00:11:28.090 }, 00:11:28.090 { 00:11:28.090 "name": "BaseBdev3", 00:11:28.090 "uuid": "ee7db1a8-0dad-469a-933d-44127a5d8fab", 00:11:28.090 
"is_configured": true, 00:11:28.090 "data_offset": 0, 00:11:28.090 "data_size": 65536 00:11:28.090 }, 00:11:28.090 { 00:11:28.090 "name": "BaseBdev4", 00:11:28.090 "uuid": "754aea9f-e59b-4dab-88dc-4f46f287c910", 00:11:28.090 "is_configured": true, 00:11:28.090 "data_offset": 0, 00:11:28.090 "data_size": 65536 00:11:28.090 } 00:11:28.090 ] 00:11:28.090 } 00:11:28.090 } 00:11:28.090 }' 00:11:28.090 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:28.349 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:28.349 BaseBdev2 00:11:28.349 BaseBdev3 00:11:28.349 BaseBdev4' 00:11:28.349 17:03:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.349 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:28.349 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.349 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:28.349 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.349 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.349 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.349 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.349 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.349 17:03:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.349 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:28.349 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.349 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.349 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.350 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.350 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.350 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.350 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:28.350 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.350 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.350 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.350 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.350 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.350 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.350 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.350 17:03:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.350 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:28.350 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.350 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.609 [2024-11-20 17:03:52.260902] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:28.609 [2024-11-20 17:03:52.260930] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.609 [2024-11-20 17:03:52.261014] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.609 [2024-11-20 17:03:52.261361] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.609 [2024-11-20 17:03:52.261381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73138 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73138 ']' 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73138 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73138 00:11:28.609 killing process with pid 73138 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73138' 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73138 00:11:28.609 [2024-11-20 17:03:52.304238] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:28.609 17:03:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73138 00:11:28.869 [2024-11-20 17:03:52.626430] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:29.807 00:11:29.807 real 0m12.573s 00:11:29.807 user 0m20.950s 00:11:29.807 sys 0m1.748s 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.807 ************************************ 00:11:29.807 END TEST raid_state_function_test 00:11:29.807 ************************************ 
00:11:29.807 17:03:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:29.807 17:03:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:29.807 17:03:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.807 17:03:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.807 ************************************ 00:11:29.807 START TEST raid_state_function_test_sb 00:11:29.807 ************************************ 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.807 
17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.807 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73819 00:11:30.067 Process raid pid: 73819 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73819' 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73819 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73819 ']' 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.067 17:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.067 [2024-11-20 17:03:53.783815] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:11:30.067 [2024-11-20 17:03:53.784004] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.326 [2024-11-20 17:03:53.981034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.326 [2024-11-20 17:03:54.103261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.585 [2024-11-20 17:03:54.294986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.585 [2024-11-20 17:03:54.295045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.154 [2024-11-20 17:03:54.745189] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.154 [2024-11-20 17:03:54.745273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.154 [2024-11-20 17:03:54.745289] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.154 [2024-11-20 17:03:54.745304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.154 [2024-11-20 17:03:54.745314] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:31.154 [2024-11-20 17:03:54.745327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.154 [2024-11-20 17:03:54.745336] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:31.154 [2024-11-20 17:03:54.745350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.154 17:03:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.154 "name": "Existed_Raid", 00:11:31.154 "uuid": "183a4f6b-db97-436c-8274-50c8017a5600", 00:11:31.154 "strip_size_kb": 0, 00:11:31.154 "state": "configuring", 00:11:31.154 "raid_level": "raid1", 00:11:31.154 "superblock": true, 00:11:31.154 "num_base_bdevs": 4, 00:11:31.154 "num_base_bdevs_discovered": 0, 00:11:31.154 "num_base_bdevs_operational": 4, 00:11:31.154 "base_bdevs_list": [ 00:11:31.154 { 00:11:31.154 "name": "BaseBdev1", 00:11:31.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.154 "is_configured": false, 00:11:31.154 "data_offset": 0, 00:11:31.154 "data_size": 0 00:11:31.154 }, 00:11:31.154 { 00:11:31.154 "name": "BaseBdev2", 00:11:31.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.154 "is_configured": false, 00:11:31.154 "data_offset": 0, 00:11:31.154 "data_size": 0 00:11:31.154 }, 00:11:31.154 { 00:11:31.154 "name": "BaseBdev3", 00:11:31.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.154 "is_configured": false, 00:11:31.154 "data_offset": 0, 00:11:31.154 "data_size": 0 00:11:31.154 }, 00:11:31.154 { 00:11:31.154 "name": "BaseBdev4", 00:11:31.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.154 "is_configured": false, 00:11:31.154 "data_offset": 0, 00:11:31.154 "data_size": 0 00:11:31.154 } 00:11:31.154 ] 00:11:31.154 }' 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.154 17:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.414 17:03:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:31.414 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.414 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.414 [2024-11-20 17:03:55.241231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.414 [2024-11-20 17:03:55.241293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:31.414 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.414 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.414 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.414 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.414 [2024-11-20 17:03:55.249232] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.414 [2024-11-20 17:03:55.249289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.414 [2024-11-20 17:03:55.249302] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.414 [2024-11-20 17:03:55.249316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.414 [2024-11-20 17:03:55.249326] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:31.414 [2024-11-20 17:03:55.249338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.414 [2024-11-20 17:03:55.249347] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:31.414 [2024-11-20 17:03:55.249360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:31.414 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.414 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:31.414 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.414 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.673 [2024-11-20 17:03:55.293009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.673 BaseBdev1 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.673 [ 00:11:31.673 { 00:11:31.673 "name": "BaseBdev1", 00:11:31.673 "aliases": [ 00:11:31.673 "54c7050d-80c4-40a0-8e45-79a542639fe7" 00:11:31.673 ], 00:11:31.673 "product_name": "Malloc disk", 00:11:31.673 "block_size": 512, 00:11:31.673 "num_blocks": 65536, 00:11:31.673 "uuid": "54c7050d-80c4-40a0-8e45-79a542639fe7", 00:11:31.673 "assigned_rate_limits": { 00:11:31.673 "rw_ios_per_sec": 0, 00:11:31.673 "rw_mbytes_per_sec": 0, 00:11:31.673 "r_mbytes_per_sec": 0, 00:11:31.673 "w_mbytes_per_sec": 0 00:11:31.673 }, 00:11:31.673 "claimed": true, 00:11:31.673 "claim_type": "exclusive_write", 00:11:31.673 "zoned": false, 00:11:31.673 "supported_io_types": { 00:11:31.673 "read": true, 00:11:31.673 "write": true, 00:11:31.673 "unmap": true, 00:11:31.673 "flush": true, 00:11:31.673 "reset": true, 00:11:31.673 "nvme_admin": false, 00:11:31.673 "nvme_io": false, 00:11:31.673 "nvme_io_md": false, 00:11:31.673 "write_zeroes": true, 00:11:31.673 "zcopy": true, 00:11:31.673 "get_zone_info": false, 00:11:31.673 "zone_management": false, 00:11:31.673 "zone_append": false, 00:11:31.673 "compare": false, 00:11:31.673 "compare_and_write": false, 00:11:31.673 "abort": true, 00:11:31.673 "seek_hole": false, 00:11:31.673 "seek_data": false, 00:11:31.673 "copy": true, 00:11:31.673 "nvme_iov_md": false 00:11:31.673 }, 00:11:31.673 "memory_domains": [ 00:11:31.673 { 00:11:31.673 "dma_device_id": "system", 00:11:31.673 "dma_device_type": 1 00:11:31.673 }, 00:11:31.673 { 00:11:31.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.673 "dma_device_type": 2 00:11:31.673 } 00:11:31.673 ], 00:11:31.673 "driver_specific": {} 
00:11:31.673 } 00:11:31.673 ] 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.673 "name": "Existed_Raid", 00:11:31.673 "uuid": "9c3f7cbf-ddb3-4a33-9557-eb28a5401556", 00:11:31.673 "strip_size_kb": 0, 00:11:31.673 "state": "configuring", 00:11:31.673 "raid_level": "raid1", 00:11:31.673 "superblock": true, 00:11:31.673 "num_base_bdevs": 4, 00:11:31.673 "num_base_bdevs_discovered": 1, 00:11:31.673 "num_base_bdevs_operational": 4, 00:11:31.673 "base_bdevs_list": [ 00:11:31.673 { 00:11:31.673 "name": "BaseBdev1", 00:11:31.673 "uuid": "54c7050d-80c4-40a0-8e45-79a542639fe7", 00:11:31.673 "is_configured": true, 00:11:31.673 "data_offset": 2048, 00:11:31.673 "data_size": 63488 00:11:31.673 }, 00:11:31.673 { 00:11:31.673 "name": "BaseBdev2", 00:11:31.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.673 "is_configured": false, 00:11:31.673 "data_offset": 0, 00:11:31.673 "data_size": 0 00:11:31.673 }, 00:11:31.673 { 00:11:31.673 "name": "BaseBdev3", 00:11:31.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.673 "is_configured": false, 00:11:31.673 "data_offset": 0, 00:11:31.673 "data_size": 0 00:11:31.673 }, 00:11:31.673 { 00:11:31.673 "name": "BaseBdev4", 00:11:31.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.673 "is_configured": false, 00:11:31.673 "data_offset": 0, 00:11:31.673 "data_size": 0 00:11:31.673 } 00:11:31.673 ] 00:11:31.673 }' 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.673 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.241 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:32.241 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.241 17:03:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:32.241 [2024-11-20 17:03:55.825311] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:32.241 [2024-11-20 17:03:55.825415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:32.241 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.241 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:32.241 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.241 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.241 [2024-11-20 17:03:55.833321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.241 [2024-11-20 17:03:55.836134] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:32.241 [2024-11-20 17:03:55.836196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:32.241 [2024-11-20 17:03:55.836211] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:32.241 [2024-11-20 17:03:55.836227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:32.241 [2024-11-20 17:03:55.836236] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:32.241 [2024-11-20 17:03:55.836249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:32.241 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:32.242 17:03:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.242 "name": 
"Existed_Raid", 00:11:32.242 "uuid": "2e88652e-51f8-471f-877f-d1c9fc341b03", 00:11:32.242 "strip_size_kb": 0, 00:11:32.242 "state": "configuring", 00:11:32.242 "raid_level": "raid1", 00:11:32.242 "superblock": true, 00:11:32.242 "num_base_bdevs": 4, 00:11:32.242 "num_base_bdevs_discovered": 1, 00:11:32.242 "num_base_bdevs_operational": 4, 00:11:32.242 "base_bdevs_list": [ 00:11:32.242 { 00:11:32.242 "name": "BaseBdev1", 00:11:32.242 "uuid": "54c7050d-80c4-40a0-8e45-79a542639fe7", 00:11:32.242 "is_configured": true, 00:11:32.242 "data_offset": 2048, 00:11:32.242 "data_size": 63488 00:11:32.242 }, 00:11:32.242 { 00:11:32.242 "name": "BaseBdev2", 00:11:32.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.242 "is_configured": false, 00:11:32.242 "data_offset": 0, 00:11:32.242 "data_size": 0 00:11:32.242 }, 00:11:32.242 { 00:11:32.242 "name": "BaseBdev3", 00:11:32.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.242 "is_configured": false, 00:11:32.242 "data_offset": 0, 00:11:32.242 "data_size": 0 00:11:32.242 }, 00:11:32.242 { 00:11:32.242 "name": "BaseBdev4", 00:11:32.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.242 "is_configured": false, 00:11:32.242 "data_offset": 0, 00:11:32.242 "data_size": 0 00:11:32.242 } 00:11:32.242 ] 00:11:32.242 }' 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.242 17:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.500 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:32.500 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.500 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.759 [2024-11-20 17:03:56.403384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.759 
BaseBdev2 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.759 [ 00:11:32.759 { 00:11:32.759 "name": "BaseBdev2", 00:11:32.759 "aliases": [ 00:11:32.759 "da2dc133-1669-4243-b9cd-2bc94c506a6a" 00:11:32.759 ], 00:11:32.759 "product_name": "Malloc disk", 00:11:32.759 "block_size": 512, 00:11:32.759 "num_blocks": 65536, 00:11:32.759 "uuid": "da2dc133-1669-4243-b9cd-2bc94c506a6a", 00:11:32.759 "assigned_rate_limits": { 
00:11:32.759 "rw_ios_per_sec": 0, 00:11:32.759 "rw_mbytes_per_sec": 0, 00:11:32.759 "r_mbytes_per_sec": 0, 00:11:32.759 "w_mbytes_per_sec": 0 00:11:32.759 }, 00:11:32.759 "claimed": true, 00:11:32.759 "claim_type": "exclusive_write", 00:11:32.759 "zoned": false, 00:11:32.759 "supported_io_types": { 00:11:32.759 "read": true, 00:11:32.759 "write": true, 00:11:32.759 "unmap": true, 00:11:32.759 "flush": true, 00:11:32.759 "reset": true, 00:11:32.759 "nvme_admin": false, 00:11:32.759 "nvme_io": false, 00:11:32.759 "nvme_io_md": false, 00:11:32.759 "write_zeroes": true, 00:11:32.759 "zcopy": true, 00:11:32.759 "get_zone_info": false, 00:11:32.759 "zone_management": false, 00:11:32.759 "zone_append": false, 00:11:32.759 "compare": false, 00:11:32.759 "compare_and_write": false, 00:11:32.759 "abort": true, 00:11:32.759 "seek_hole": false, 00:11:32.759 "seek_data": false, 00:11:32.759 "copy": true, 00:11:32.759 "nvme_iov_md": false 00:11:32.759 }, 00:11:32.759 "memory_domains": [ 00:11:32.759 { 00:11:32.759 "dma_device_id": "system", 00:11:32.759 "dma_device_type": 1 00:11:32.759 }, 00:11:32.759 { 00:11:32.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.759 "dma_device_type": 2 00:11:32.759 } 00:11:32.759 ], 00:11:32.759 "driver_specific": {} 00:11:32.759 } 00:11:32.759 ] 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.759 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.760 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.760 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.760 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.760 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.760 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.760 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.760 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.760 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.760 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.760 "name": "Existed_Raid", 00:11:32.760 "uuid": "2e88652e-51f8-471f-877f-d1c9fc341b03", 00:11:32.760 "strip_size_kb": 0, 00:11:32.760 "state": "configuring", 00:11:32.760 "raid_level": "raid1", 00:11:32.760 "superblock": true, 00:11:32.760 "num_base_bdevs": 4, 00:11:32.760 "num_base_bdevs_discovered": 2, 00:11:32.760 "num_base_bdevs_operational": 4, 00:11:32.760 
"base_bdevs_list": [ 00:11:32.760 { 00:11:32.760 "name": "BaseBdev1", 00:11:32.760 "uuid": "54c7050d-80c4-40a0-8e45-79a542639fe7", 00:11:32.760 "is_configured": true, 00:11:32.760 "data_offset": 2048, 00:11:32.760 "data_size": 63488 00:11:32.760 }, 00:11:32.760 { 00:11:32.760 "name": "BaseBdev2", 00:11:32.760 "uuid": "da2dc133-1669-4243-b9cd-2bc94c506a6a", 00:11:32.760 "is_configured": true, 00:11:32.760 "data_offset": 2048, 00:11:32.760 "data_size": 63488 00:11:32.760 }, 00:11:32.760 { 00:11:32.760 "name": "BaseBdev3", 00:11:32.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.760 "is_configured": false, 00:11:32.760 "data_offset": 0, 00:11:32.760 "data_size": 0 00:11:32.760 }, 00:11:32.760 { 00:11:32.760 "name": "BaseBdev4", 00:11:32.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.760 "is_configured": false, 00:11:32.760 "data_offset": 0, 00:11:32.760 "data_size": 0 00:11:32.760 } 00:11:32.760 ] 00:11:32.760 }' 00:11:32.760 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.760 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.328 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:33.328 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.328 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.328 [2024-11-20 17:03:56.986638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.328 BaseBdev3 00:11:33.328 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.328 17:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:33.328 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:11:33.328 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.328 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:33.328 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.328 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.328 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.328 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.328 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.328 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.328 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:33.328 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.328 17:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.328 [ 00:11:33.328 { 00:11:33.328 "name": "BaseBdev3", 00:11:33.328 "aliases": [ 00:11:33.328 "e57f2a49-0aba-4958-8c23-21d85d76ba82" 00:11:33.328 ], 00:11:33.328 "product_name": "Malloc disk", 00:11:33.328 "block_size": 512, 00:11:33.328 "num_blocks": 65536, 00:11:33.328 "uuid": "e57f2a49-0aba-4958-8c23-21d85d76ba82", 00:11:33.328 "assigned_rate_limits": { 00:11:33.328 "rw_ios_per_sec": 0, 00:11:33.328 "rw_mbytes_per_sec": 0, 00:11:33.328 "r_mbytes_per_sec": 0, 00:11:33.328 "w_mbytes_per_sec": 0 00:11:33.328 }, 00:11:33.328 "claimed": true, 00:11:33.328 "claim_type": "exclusive_write", 00:11:33.328 "zoned": false, 00:11:33.328 "supported_io_types": { 00:11:33.328 "read": true, 00:11:33.328 
"write": true, 00:11:33.328 "unmap": true, 00:11:33.328 "flush": true, 00:11:33.328 "reset": true, 00:11:33.328 "nvme_admin": false, 00:11:33.328 "nvme_io": false, 00:11:33.328 "nvme_io_md": false, 00:11:33.328 "write_zeroes": true, 00:11:33.328 "zcopy": true, 00:11:33.328 "get_zone_info": false, 00:11:33.328 "zone_management": false, 00:11:33.328 "zone_append": false, 00:11:33.328 "compare": false, 00:11:33.328 "compare_and_write": false, 00:11:33.328 "abort": true, 00:11:33.328 "seek_hole": false, 00:11:33.328 "seek_data": false, 00:11:33.328 "copy": true, 00:11:33.328 "nvme_iov_md": false 00:11:33.328 }, 00:11:33.328 "memory_domains": [ 00:11:33.328 { 00:11:33.328 "dma_device_id": "system", 00:11:33.328 "dma_device_type": 1 00:11:33.328 }, 00:11:33.328 { 00:11:33.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.328 "dma_device_type": 2 00:11:33.328 } 00:11:33.328 ], 00:11:33.328 "driver_specific": {} 00:11:33.328 } 00:11:33.328 ] 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.328 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.328 "name": "Existed_Raid", 00:11:33.328 "uuid": "2e88652e-51f8-471f-877f-d1c9fc341b03", 00:11:33.328 "strip_size_kb": 0, 00:11:33.328 "state": "configuring", 00:11:33.328 "raid_level": "raid1", 00:11:33.328 "superblock": true, 00:11:33.328 "num_base_bdevs": 4, 00:11:33.329 "num_base_bdevs_discovered": 3, 00:11:33.329 "num_base_bdevs_operational": 4, 00:11:33.329 "base_bdevs_list": [ 00:11:33.329 { 00:11:33.329 "name": "BaseBdev1", 00:11:33.329 "uuid": "54c7050d-80c4-40a0-8e45-79a542639fe7", 00:11:33.329 "is_configured": true, 00:11:33.329 "data_offset": 2048, 00:11:33.329 "data_size": 63488 00:11:33.329 }, 00:11:33.329 { 00:11:33.329 "name": "BaseBdev2", 00:11:33.329 "uuid": 
"da2dc133-1669-4243-b9cd-2bc94c506a6a", 00:11:33.329 "is_configured": true, 00:11:33.329 "data_offset": 2048, 00:11:33.329 "data_size": 63488 00:11:33.329 }, 00:11:33.329 { 00:11:33.329 "name": "BaseBdev3", 00:11:33.329 "uuid": "e57f2a49-0aba-4958-8c23-21d85d76ba82", 00:11:33.329 "is_configured": true, 00:11:33.329 "data_offset": 2048, 00:11:33.329 "data_size": 63488 00:11:33.329 }, 00:11:33.329 { 00:11:33.329 "name": "BaseBdev4", 00:11:33.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.329 "is_configured": false, 00:11:33.329 "data_offset": 0, 00:11:33.329 "data_size": 0 00:11:33.329 } 00:11:33.329 ] 00:11:33.329 }' 00:11:33.329 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.329 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.897 [2024-11-20 17:03:57.551742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:33.897 [2024-11-20 17:03:57.552194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:33.897 [2024-11-20 17:03:57.552244] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:33.897 BaseBdev4 00:11:33.897 [2024-11-20 17:03:57.552597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:33.897 [2024-11-20 17:03:57.552869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:33.897 [2024-11-20 17:03:57.552898] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.897 [2024-11-20 17:03:57.553113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.897 [ 00:11:33.897 { 00:11:33.897 "name": "BaseBdev4", 00:11:33.897 "aliases": [ 00:11:33.897 "13388747-9a73-460b-965c-11a1902d5531" 00:11:33.897 ], 00:11:33.897 "product_name": "Malloc disk", 00:11:33.897 "block_size": 512, 00:11:33.897 
"num_blocks": 65536, 00:11:33.897 "uuid": "13388747-9a73-460b-965c-11a1902d5531", 00:11:33.897 "assigned_rate_limits": { 00:11:33.897 "rw_ios_per_sec": 0, 00:11:33.897 "rw_mbytes_per_sec": 0, 00:11:33.897 "r_mbytes_per_sec": 0, 00:11:33.897 "w_mbytes_per_sec": 0 00:11:33.897 }, 00:11:33.897 "claimed": true, 00:11:33.897 "claim_type": "exclusive_write", 00:11:33.897 "zoned": false, 00:11:33.897 "supported_io_types": { 00:11:33.897 "read": true, 00:11:33.897 "write": true, 00:11:33.897 "unmap": true, 00:11:33.897 "flush": true, 00:11:33.897 "reset": true, 00:11:33.897 "nvme_admin": false, 00:11:33.897 "nvme_io": false, 00:11:33.897 "nvme_io_md": false, 00:11:33.897 "write_zeroes": true, 00:11:33.897 "zcopy": true, 00:11:33.897 "get_zone_info": false, 00:11:33.897 "zone_management": false, 00:11:33.897 "zone_append": false, 00:11:33.897 "compare": false, 00:11:33.897 "compare_and_write": false, 00:11:33.897 "abort": true, 00:11:33.897 "seek_hole": false, 00:11:33.897 "seek_data": false, 00:11:33.897 "copy": true, 00:11:33.897 "nvme_iov_md": false 00:11:33.897 }, 00:11:33.897 "memory_domains": [ 00:11:33.897 { 00:11:33.897 "dma_device_id": "system", 00:11:33.897 "dma_device_type": 1 00:11:33.897 }, 00:11:33.897 { 00:11:33.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.897 "dma_device_type": 2 00:11:33.897 } 00:11:33.897 ], 00:11:33.897 "driver_specific": {} 00:11:33.897 } 00:11:33.897 ] 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.897 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.897 "name": "Existed_Raid", 00:11:33.897 "uuid": "2e88652e-51f8-471f-877f-d1c9fc341b03", 00:11:33.897 "strip_size_kb": 0, 00:11:33.897 "state": "online", 00:11:33.897 "raid_level": "raid1", 00:11:33.897 "superblock": true, 00:11:33.897 "num_base_bdevs": 4, 
00:11:33.897 "num_base_bdevs_discovered": 4, 00:11:33.897 "num_base_bdevs_operational": 4, 00:11:33.897 "base_bdevs_list": [ 00:11:33.897 { 00:11:33.897 "name": "BaseBdev1", 00:11:33.897 "uuid": "54c7050d-80c4-40a0-8e45-79a542639fe7", 00:11:33.897 "is_configured": true, 00:11:33.897 "data_offset": 2048, 00:11:33.897 "data_size": 63488 00:11:33.897 }, 00:11:33.897 { 00:11:33.897 "name": "BaseBdev2", 00:11:33.897 "uuid": "da2dc133-1669-4243-b9cd-2bc94c506a6a", 00:11:33.897 "is_configured": true, 00:11:33.897 "data_offset": 2048, 00:11:33.897 "data_size": 63488 00:11:33.897 }, 00:11:33.897 { 00:11:33.897 "name": "BaseBdev3", 00:11:33.897 "uuid": "e57f2a49-0aba-4958-8c23-21d85d76ba82", 00:11:33.897 "is_configured": true, 00:11:33.897 "data_offset": 2048, 00:11:33.897 "data_size": 63488 00:11:33.897 }, 00:11:33.897 { 00:11:33.898 "name": "BaseBdev4", 00:11:33.898 "uuid": "13388747-9a73-460b-965c-11a1902d5531", 00:11:33.898 "is_configured": true, 00:11:33.898 "data_offset": 2048, 00:11:33.898 "data_size": 63488 00:11:33.898 } 00:11:33.898 ] 00:11:33.898 }' 00:11:33.898 17:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.898 17:03:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.465 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:34.465 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:34.465 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:34.465 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:34.465 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:34.465 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:34.465 
17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:34.465 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.465 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.465 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:34.465 [2024-11-20 17:03:58.108417] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.465 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.465 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:34.465 "name": "Existed_Raid", 00:11:34.465 "aliases": [ 00:11:34.465 "2e88652e-51f8-471f-877f-d1c9fc341b03" 00:11:34.465 ], 00:11:34.465 "product_name": "Raid Volume", 00:11:34.465 "block_size": 512, 00:11:34.465 "num_blocks": 63488, 00:11:34.465 "uuid": "2e88652e-51f8-471f-877f-d1c9fc341b03", 00:11:34.465 "assigned_rate_limits": { 00:11:34.465 "rw_ios_per_sec": 0, 00:11:34.465 "rw_mbytes_per_sec": 0, 00:11:34.465 "r_mbytes_per_sec": 0, 00:11:34.465 "w_mbytes_per_sec": 0 00:11:34.465 }, 00:11:34.465 "claimed": false, 00:11:34.465 "zoned": false, 00:11:34.465 "supported_io_types": { 00:11:34.465 "read": true, 00:11:34.465 "write": true, 00:11:34.465 "unmap": false, 00:11:34.465 "flush": false, 00:11:34.465 "reset": true, 00:11:34.465 "nvme_admin": false, 00:11:34.465 "nvme_io": false, 00:11:34.465 "nvme_io_md": false, 00:11:34.465 "write_zeroes": true, 00:11:34.465 "zcopy": false, 00:11:34.465 "get_zone_info": false, 00:11:34.465 "zone_management": false, 00:11:34.465 "zone_append": false, 00:11:34.465 "compare": false, 00:11:34.465 "compare_and_write": false, 00:11:34.465 "abort": false, 00:11:34.465 "seek_hole": false, 00:11:34.465 "seek_data": false, 00:11:34.465 "copy": false, 00:11:34.465 
"nvme_iov_md": false 00:11:34.465 }, 00:11:34.465 "memory_domains": [ 00:11:34.465 { 00:11:34.465 "dma_device_id": "system", 00:11:34.465 "dma_device_type": 1 00:11:34.465 }, 00:11:34.465 { 00:11:34.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.465 "dma_device_type": 2 00:11:34.465 }, 00:11:34.465 { 00:11:34.465 "dma_device_id": "system", 00:11:34.465 "dma_device_type": 1 00:11:34.465 }, 00:11:34.465 { 00:11:34.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.465 "dma_device_type": 2 00:11:34.465 }, 00:11:34.465 { 00:11:34.465 "dma_device_id": "system", 00:11:34.465 "dma_device_type": 1 00:11:34.465 }, 00:11:34.465 { 00:11:34.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.465 "dma_device_type": 2 00:11:34.465 }, 00:11:34.465 { 00:11:34.465 "dma_device_id": "system", 00:11:34.465 "dma_device_type": 1 00:11:34.465 }, 00:11:34.465 { 00:11:34.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.465 "dma_device_type": 2 00:11:34.465 } 00:11:34.465 ], 00:11:34.465 "driver_specific": { 00:11:34.465 "raid": { 00:11:34.466 "uuid": "2e88652e-51f8-471f-877f-d1c9fc341b03", 00:11:34.466 "strip_size_kb": 0, 00:11:34.466 "state": "online", 00:11:34.466 "raid_level": "raid1", 00:11:34.466 "superblock": true, 00:11:34.466 "num_base_bdevs": 4, 00:11:34.466 "num_base_bdevs_discovered": 4, 00:11:34.466 "num_base_bdevs_operational": 4, 00:11:34.466 "base_bdevs_list": [ 00:11:34.466 { 00:11:34.466 "name": "BaseBdev1", 00:11:34.466 "uuid": "54c7050d-80c4-40a0-8e45-79a542639fe7", 00:11:34.466 "is_configured": true, 00:11:34.466 "data_offset": 2048, 00:11:34.466 "data_size": 63488 00:11:34.466 }, 00:11:34.466 { 00:11:34.466 "name": "BaseBdev2", 00:11:34.466 "uuid": "da2dc133-1669-4243-b9cd-2bc94c506a6a", 00:11:34.466 "is_configured": true, 00:11:34.466 "data_offset": 2048, 00:11:34.466 "data_size": 63488 00:11:34.466 }, 00:11:34.466 { 00:11:34.466 "name": "BaseBdev3", 00:11:34.466 "uuid": "e57f2a49-0aba-4958-8c23-21d85d76ba82", 00:11:34.466 "is_configured": true, 
00:11:34.466 "data_offset": 2048, 00:11:34.466 "data_size": 63488 00:11:34.466 }, 00:11:34.466 { 00:11:34.466 "name": "BaseBdev4", 00:11:34.466 "uuid": "13388747-9a73-460b-965c-11a1902d5531", 00:11:34.466 "is_configured": true, 00:11:34.466 "data_offset": 2048, 00:11:34.466 "data_size": 63488 00:11:34.466 } 00:11:34.466 ] 00:11:34.466 } 00:11:34.466 } 00:11:34.466 }' 00:11:34.466 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.466 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:34.466 BaseBdev2 00:11:34.466 BaseBdev3 00:11:34.466 BaseBdev4' 00:11:34.466 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.466 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:34.466 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.466 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:34.466 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.466 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.466 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.466 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.466 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.466 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.466 17:03:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.466 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:34.466 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.466 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.466 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.466 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.724 [2024-11-20 17:03:58.464248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:34.724 17:03:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.724 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.725 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.725 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.725 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.725 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.725 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.725 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.725 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.995 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.995 "name": "Existed_Raid", 00:11:34.995 "uuid": "2e88652e-51f8-471f-877f-d1c9fc341b03", 00:11:34.995 "strip_size_kb": 0, 00:11:34.995 
"state": "online", 00:11:34.995 "raid_level": "raid1", 00:11:34.995 "superblock": true, 00:11:34.995 "num_base_bdevs": 4, 00:11:34.995 "num_base_bdevs_discovered": 3, 00:11:34.995 "num_base_bdevs_operational": 3, 00:11:34.995 "base_bdevs_list": [ 00:11:34.995 { 00:11:34.995 "name": null, 00:11:34.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.995 "is_configured": false, 00:11:34.995 "data_offset": 0, 00:11:34.995 "data_size": 63488 00:11:34.995 }, 00:11:34.995 { 00:11:34.995 "name": "BaseBdev2", 00:11:34.995 "uuid": "da2dc133-1669-4243-b9cd-2bc94c506a6a", 00:11:34.995 "is_configured": true, 00:11:34.995 "data_offset": 2048, 00:11:34.995 "data_size": 63488 00:11:34.995 }, 00:11:34.995 { 00:11:34.995 "name": "BaseBdev3", 00:11:34.995 "uuid": "e57f2a49-0aba-4958-8c23-21d85d76ba82", 00:11:34.995 "is_configured": true, 00:11:34.995 "data_offset": 2048, 00:11:34.995 "data_size": 63488 00:11:34.995 }, 00:11:34.995 { 00:11:34.995 "name": "BaseBdev4", 00:11:34.995 "uuid": "13388747-9a73-460b-965c-11a1902d5531", 00:11:34.995 "is_configured": true, 00:11:34.995 "data_offset": 2048, 00:11:34.995 "data_size": 63488 00:11:34.995 } 00:11:34.995 ] 00:11:34.995 }' 00:11:34.995 17:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.995 17:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.270 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:35.270 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:35.270 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.270 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:35.270 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.270 17:03:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.270 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.270 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:35.270 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:35.270 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:35.270 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.270 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.270 [2024-11-20 17:03:59.102279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.528 [2024-11-20 17:03:59.252600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.528 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.528 [2024-11-20 17:03:59.389629] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:35.528 [2024-11-20 17:03:59.389797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.787 [2024-11-20 17:03:59.467136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.787 [2024-11-20 17:03:59.467216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.787 [2024-11-20 17:03:59.467235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:35.787 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.787 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:35.787 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:35.787 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.787 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:35.787 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.787 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.787 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.787 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:35.787 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:35.787 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:35.787 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:35.787 17:03:59 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.787 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:35.787 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.788 BaseBdev2 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:35.788 [ 00:11:35.788 { 00:11:35.788 "name": "BaseBdev2", 00:11:35.788 "aliases": [ 00:11:35.788 "681f6936-4fc6-4226-9337-51382714d912" 00:11:35.788 ], 00:11:35.788 "product_name": "Malloc disk", 00:11:35.788 "block_size": 512, 00:11:35.788 "num_blocks": 65536, 00:11:35.788 "uuid": "681f6936-4fc6-4226-9337-51382714d912", 00:11:35.788 "assigned_rate_limits": { 00:11:35.788 "rw_ios_per_sec": 0, 00:11:35.788 "rw_mbytes_per_sec": 0, 00:11:35.788 "r_mbytes_per_sec": 0, 00:11:35.788 "w_mbytes_per_sec": 0 00:11:35.788 }, 00:11:35.788 "claimed": false, 00:11:35.788 "zoned": false, 00:11:35.788 "supported_io_types": { 00:11:35.788 "read": true, 00:11:35.788 "write": true, 00:11:35.788 "unmap": true, 00:11:35.788 "flush": true, 00:11:35.788 "reset": true, 00:11:35.788 "nvme_admin": false, 00:11:35.788 "nvme_io": false, 00:11:35.788 "nvme_io_md": false, 00:11:35.788 "write_zeroes": true, 00:11:35.788 "zcopy": true, 00:11:35.788 "get_zone_info": false, 00:11:35.788 "zone_management": false, 00:11:35.788 "zone_append": false, 00:11:35.788 "compare": false, 00:11:35.788 "compare_and_write": false, 00:11:35.788 "abort": true, 00:11:35.788 "seek_hole": false, 00:11:35.788 "seek_data": false, 00:11:35.788 "copy": true, 00:11:35.788 "nvme_iov_md": false 00:11:35.788 }, 00:11:35.788 "memory_domains": [ 00:11:35.788 { 00:11:35.788 "dma_device_id": "system", 00:11:35.788 "dma_device_type": 1 00:11:35.788 }, 00:11:35.788 { 00:11:35.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.788 "dma_device_type": 2 00:11:35.788 } 00:11:35.788 ], 00:11:35.788 "driver_specific": {} 00:11:35.788 } 00:11:35.788 ] 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:35.788 17:03:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.788 BaseBdev3 00:11:35.788 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.047 17:03:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.047 [ 00:11:36.047 { 00:11:36.047 "name": "BaseBdev3", 00:11:36.047 "aliases": [ 00:11:36.047 "c3936987-93dd-4be2-a6df-002efc2e727e" 00:11:36.047 ], 00:11:36.047 "product_name": "Malloc disk", 00:11:36.047 "block_size": 512, 00:11:36.047 "num_blocks": 65536, 00:11:36.047 "uuid": "c3936987-93dd-4be2-a6df-002efc2e727e", 00:11:36.047 "assigned_rate_limits": { 00:11:36.047 "rw_ios_per_sec": 0, 00:11:36.047 "rw_mbytes_per_sec": 0, 00:11:36.047 "r_mbytes_per_sec": 0, 00:11:36.047 "w_mbytes_per_sec": 0 00:11:36.047 }, 00:11:36.047 "claimed": false, 00:11:36.047 "zoned": false, 00:11:36.047 "supported_io_types": { 00:11:36.047 "read": true, 00:11:36.047 "write": true, 00:11:36.047 "unmap": true, 00:11:36.047 "flush": true, 00:11:36.047 "reset": true, 00:11:36.047 "nvme_admin": false, 00:11:36.047 "nvme_io": false, 00:11:36.047 "nvme_io_md": false, 00:11:36.047 "write_zeroes": true, 00:11:36.047 "zcopy": true, 00:11:36.047 "get_zone_info": false, 00:11:36.047 "zone_management": false, 00:11:36.047 "zone_append": false, 00:11:36.047 "compare": false, 00:11:36.047 "compare_and_write": false, 00:11:36.047 "abort": true, 00:11:36.047 "seek_hole": false, 00:11:36.047 "seek_data": false, 00:11:36.047 "copy": true, 00:11:36.047 "nvme_iov_md": false 00:11:36.047 }, 00:11:36.047 "memory_domains": [ 00:11:36.047 { 00:11:36.047 "dma_device_id": "system", 00:11:36.047 "dma_device_type": 1 00:11:36.047 }, 00:11:36.047 { 00:11:36.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.047 "dma_device_type": 2 00:11:36.047 } 00:11:36.047 ], 00:11:36.047 "driver_specific": {} 00:11:36.047 } 00:11:36.047 ] 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.047 BaseBdev4 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.047 [ 00:11:36.047 { 00:11:36.047 "name": "BaseBdev4", 00:11:36.047 "aliases": [ 00:11:36.047 "c2eea63e-545e-4688-bfc8-3d46b6ad63c4" 00:11:36.047 ], 00:11:36.047 "product_name": "Malloc disk", 00:11:36.047 "block_size": 512, 00:11:36.047 "num_blocks": 65536, 00:11:36.047 "uuid": "c2eea63e-545e-4688-bfc8-3d46b6ad63c4", 00:11:36.047 "assigned_rate_limits": { 00:11:36.047 "rw_ios_per_sec": 0, 00:11:36.047 "rw_mbytes_per_sec": 0, 00:11:36.047 "r_mbytes_per_sec": 0, 00:11:36.047 "w_mbytes_per_sec": 0 00:11:36.047 }, 00:11:36.047 "claimed": false, 00:11:36.047 "zoned": false, 00:11:36.047 "supported_io_types": { 00:11:36.047 "read": true, 00:11:36.047 "write": true, 00:11:36.047 "unmap": true, 00:11:36.047 "flush": true, 00:11:36.047 "reset": true, 00:11:36.047 "nvme_admin": false, 00:11:36.047 "nvme_io": false, 00:11:36.047 "nvme_io_md": false, 00:11:36.047 "write_zeroes": true, 00:11:36.047 "zcopy": true, 00:11:36.047 "get_zone_info": false, 00:11:36.047 "zone_management": false, 00:11:36.047 "zone_append": false, 00:11:36.047 "compare": false, 00:11:36.047 "compare_and_write": false, 00:11:36.047 "abort": true, 00:11:36.047 "seek_hole": false, 00:11:36.047 "seek_data": false, 00:11:36.047 "copy": true, 00:11:36.047 "nvme_iov_md": false 00:11:36.047 }, 00:11:36.047 "memory_domains": [ 00:11:36.047 { 00:11:36.047 "dma_device_id": "system", 00:11:36.047 "dma_device_type": 1 00:11:36.047 }, 00:11:36.047 { 00:11:36.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.047 "dma_device_type": 2 00:11:36.047 } 00:11:36.047 ], 00:11:36.047 "driver_specific": {} 00:11:36.047 } 00:11:36.047 ] 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.047 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.047 [2024-11-20 17:03:59.756281] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:36.047 [2024-11-20 17:03:59.756355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:36.047 [2024-11-20 17:03:59.756388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.047 [2024-11-20 17:03:59.758844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.048 [2024-11-20 17:03:59.758940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.048 "name": "Existed_Raid", 00:11:36.048 "uuid": "1dcb20d3-879b-434b-9fb6-1fe7a59616d3", 00:11:36.048 "strip_size_kb": 0, 00:11:36.048 "state": "configuring", 00:11:36.048 "raid_level": "raid1", 00:11:36.048 "superblock": true, 00:11:36.048 "num_base_bdevs": 4, 00:11:36.048 "num_base_bdevs_discovered": 3, 00:11:36.048 "num_base_bdevs_operational": 4, 00:11:36.048 "base_bdevs_list": [ 00:11:36.048 { 00:11:36.048 "name": "BaseBdev1", 00:11:36.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.048 "is_configured": false, 00:11:36.048 "data_offset": 0, 00:11:36.048 "data_size": 0 00:11:36.048 }, 00:11:36.048 { 00:11:36.048 "name": "BaseBdev2", 00:11:36.048 "uuid": "681f6936-4fc6-4226-9337-51382714d912", 
00:11:36.048 "is_configured": true, 00:11:36.048 "data_offset": 2048, 00:11:36.048 "data_size": 63488 00:11:36.048 }, 00:11:36.048 { 00:11:36.048 "name": "BaseBdev3", 00:11:36.048 "uuid": "c3936987-93dd-4be2-a6df-002efc2e727e", 00:11:36.048 "is_configured": true, 00:11:36.048 "data_offset": 2048, 00:11:36.048 "data_size": 63488 00:11:36.048 }, 00:11:36.048 { 00:11:36.048 "name": "BaseBdev4", 00:11:36.048 "uuid": "c2eea63e-545e-4688-bfc8-3d46b6ad63c4", 00:11:36.048 "is_configured": true, 00:11:36.048 "data_offset": 2048, 00:11:36.048 "data_size": 63488 00:11:36.048 } 00:11:36.048 ] 00:11:36.048 }' 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.048 17:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.614 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:36.614 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.614 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.614 [2024-11-20 17:04:00.272422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:36.614 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.614 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:36.614 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.614 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.614 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.615 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:36.615 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.615 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.615 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.615 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.615 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.615 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.615 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.615 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.615 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.615 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.615 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.615 "name": "Existed_Raid", 00:11:36.615 "uuid": "1dcb20d3-879b-434b-9fb6-1fe7a59616d3", 00:11:36.615 "strip_size_kb": 0, 00:11:36.615 "state": "configuring", 00:11:36.615 "raid_level": "raid1", 00:11:36.615 "superblock": true, 00:11:36.615 "num_base_bdevs": 4, 00:11:36.615 "num_base_bdevs_discovered": 2, 00:11:36.615 "num_base_bdevs_operational": 4, 00:11:36.615 "base_bdevs_list": [ 00:11:36.615 { 00:11:36.615 "name": "BaseBdev1", 00:11:36.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.615 "is_configured": false, 00:11:36.615 "data_offset": 0, 00:11:36.615 "data_size": 0 00:11:36.615 }, 00:11:36.615 { 00:11:36.615 "name": null, 00:11:36.615 "uuid": "681f6936-4fc6-4226-9337-51382714d912", 00:11:36.615 
"is_configured": false, 00:11:36.615 "data_offset": 0, 00:11:36.615 "data_size": 63488 00:11:36.615 }, 00:11:36.615 { 00:11:36.615 "name": "BaseBdev3", 00:11:36.615 "uuid": "c3936987-93dd-4be2-a6df-002efc2e727e", 00:11:36.615 "is_configured": true, 00:11:36.615 "data_offset": 2048, 00:11:36.615 "data_size": 63488 00:11:36.615 }, 00:11:36.615 { 00:11:36.615 "name": "BaseBdev4", 00:11:36.615 "uuid": "c2eea63e-545e-4688-bfc8-3d46b6ad63c4", 00:11:36.615 "is_configured": true, 00:11:36.615 "data_offset": 2048, 00:11:36.615 "data_size": 63488 00:11:36.615 } 00:11:36.615 ] 00:11:36.615 }' 00:11:36.615 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.615 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.182 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.182 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:37.182 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.182 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.183 [2024-11-20 17:04:00.902526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:37.183 BaseBdev1 
00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.183 [ 00:11:37.183 { 00:11:37.183 "name": "BaseBdev1", 00:11:37.183 "aliases": [ 00:11:37.183 "e191d196-15c1-4a2a-a1d4-8220ba393abd" 00:11:37.183 ], 00:11:37.183 "product_name": "Malloc disk", 00:11:37.183 "block_size": 512, 00:11:37.183 "num_blocks": 65536, 00:11:37.183 "uuid": "e191d196-15c1-4a2a-a1d4-8220ba393abd", 00:11:37.183 "assigned_rate_limits": { 00:11:37.183 
"rw_ios_per_sec": 0, 00:11:37.183 "rw_mbytes_per_sec": 0, 00:11:37.183 "r_mbytes_per_sec": 0, 00:11:37.183 "w_mbytes_per_sec": 0 00:11:37.183 }, 00:11:37.183 "claimed": true, 00:11:37.183 "claim_type": "exclusive_write", 00:11:37.183 "zoned": false, 00:11:37.183 "supported_io_types": { 00:11:37.183 "read": true, 00:11:37.183 "write": true, 00:11:37.183 "unmap": true, 00:11:37.183 "flush": true, 00:11:37.183 "reset": true, 00:11:37.183 "nvme_admin": false, 00:11:37.183 "nvme_io": false, 00:11:37.183 "nvme_io_md": false, 00:11:37.183 "write_zeroes": true, 00:11:37.183 "zcopy": true, 00:11:37.183 "get_zone_info": false, 00:11:37.183 "zone_management": false, 00:11:37.183 "zone_append": false, 00:11:37.183 "compare": false, 00:11:37.183 "compare_and_write": false, 00:11:37.183 "abort": true, 00:11:37.183 "seek_hole": false, 00:11:37.183 "seek_data": false, 00:11:37.183 "copy": true, 00:11:37.183 "nvme_iov_md": false 00:11:37.183 }, 00:11:37.183 "memory_domains": [ 00:11:37.183 { 00:11:37.183 "dma_device_id": "system", 00:11:37.183 "dma_device_type": 1 00:11:37.183 }, 00:11:37.183 { 00:11:37.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.183 "dma_device_type": 2 00:11:37.183 } 00:11:37.183 ], 00:11:37.183 "driver_specific": {} 00:11:37.183 } 00:11:37.183 ] 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.183 "name": "Existed_Raid", 00:11:37.183 "uuid": "1dcb20d3-879b-434b-9fb6-1fe7a59616d3", 00:11:37.183 "strip_size_kb": 0, 00:11:37.183 "state": "configuring", 00:11:37.183 "raid_level": "raid1", 00:11:37.183 "superblock": true, 00:11:37.183 "num_base_bdevs": 4, 00:11:37.183 "num_base_bdevs_discovered": 3, 00:11:37.183 "num_base_bdevs_operational": 4, 00:11:37.183 "base_bdevs_list": [ 00:11:37.183 { 00:11:37.183 "name": "BaseBdev1", 00:11:37.183 "uuid": "e191d196-15c1-4a2a-a1d4-8220ba393abd", 00:11:37.183 "is_configured": true, 00:11:37.183 "data_offset": 2048, 00:11:37.183 "data_size": 63488 
00:11:37.183 }, 00:11:37.183 { 00:11:37.183 "name": null, 00:11:37.183 "uuid": "681f6936-4fc6-4226-9337-51382714d912", 00:11:37.183 "is_configured": false, 00:11:37.183 "data_offset": 0, 00:11:37.183 "data_size": 63488 00:11:37.183 }, 00:11:37.183 { 00:11:37.183 "name": "BaseBdev3", 00:11:37.183 "uuid": "c3936987-93dd-4be2-a6df-002efc2e727e", 00:11:37.183 "is_configured": true, 00:11:37.183 "data_offset": 2048, 00:11:37.183 "data_size": 63488 00:11:37.183 }, 00:11:37.183 { 00:11:37.183 "name": "BaseBdev4", 00:11:37.183 "uuid": "c2eea63e-545e-4688-bfc8-3d46b6ad63c4", 00:11:37.183 "is_configured": true, 00:11:37.183 "data_offset": 2048, 00:11:37.183 "data_size": 63488 00:11:37.183 } 00:11:37.183 ] 00:11:37.183 }' 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.183 17:04:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.749 
[2024-11-20 17:04:01.506752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.749 17:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.749 17:04:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.749 "name": "Existed_Raid", 00:11:37.749 "uuid": "1dcb20d3-879b-434b-9fb6-1fe7a59616d3", 00:11:37.749 "strip_size_kb": 0, 00:11:37.749 "state": "configuring", 00:11:37.749 "raid_level": "raid1", 00:11:37.749 "superblock": true, 00:11:37.749 "num_base_bdevs": 4, 00:11:37.749 "num_base_bdevs_discovered": 2, 00:11:37.749 "num_base_bdevs_operational": 4, 00:11:37.749 "base_bdevs_list": [ 00:11:37.749 { 00:11:37.749 "name": "BaseBdev1", 00:11:37.749 "uuid": "e191d196-15c1-4a2a-a1d4-8220ba393abd", 00:11:37.749 "is_configured": true, 00:11:37.749 "data_offset": 2048, 00:11:37.749 "data_size": 63488 00:11:37.749 }, 00:11:37.749 { 00:11:37.749 "name": null, 00:11:37.749 "uuid": "681f6936-4fc6-4226-9337-51382714d912", 00:11:37.749 "is_configured": false, 00:11:37.749 "data_offset": 0, 00:11:37.749 "data_size": 63488 00:11:37.749 }, 00:11:37.749 { 00:11:37.749 "name": null, 00:11:37.749 "uuid": "c3936987-93dd-4be2-a6df-002efc2e727e", 00:11:37.749 "is_configured": false, 00:11:37.749 "data_offset": 0, 00:11:37.749 "data_size": 63488 00:11:37.750 }, 00:11:37.750 { 00:11:37.750 "name": "BaseBdev4", 00:11:37.750 "uuid": "c2eea63e-545e-4688-bfc8-3d46b6ad63c4", 00:11:37.750 "is_configured": true, 00:11:37.750 "data_offset": 2048, 00:11:37.750 "data_size": 63488 00:11:37.750 } 00:11:37.750 ] 00:11:37.750 }' 00:11:37.750 17:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.750 17:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.316 
17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.316 [2024-11-20 17:04:02.066898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.316 "name": "Existed_Raid", 00:11:38.316 "uuid": "1dcb20d3-879b-434b-9fb6-1fe7a59616d3", 00:11:38.316 "strip_size_kb": 0, 00:11:38.316 "state": "configuring", 00:11:38.316 "raid_level": "raid1", 00:11:38.316 "superblock": true, 00:11:38.316 "num_base_bdevs": 4, 00:11:38.316 "num_base_bdevs_discovered": 3, 00:11:38.316 "num_base_bdevs_operational": 4, 00:11:38.316 "base_bdevs_list": [ 00:11:38.316 { 00:11:38.316 "name": "BaseBdev1", 00:11:38.316 "uuid": "e191d196-15c1-4a2a-a1d4-8220ba393abd", 00:11:38.316 "is_configured": true, 00:11:38.316 "data_offset": 2048, 00:11:38.316 "data_size": 63488 00:11:38.316 }, 00:11:38.316 { 00:11:38.316 "name": null, 00:11:38.316 "uuid": "681f6936-4fc6-4226-9337-51382714d912", 00:11:38.316 "is_configured": false, 00:11:38.316 "data_offset": 0, 00:11:38.316 "data_size": 63488 00:11:38.316 }, 00:11:38.316 { 00:11:38.316 "name": "BaseBdev3", 00:11:38.316 "uuid": "c3936987-93dd-4be2-a6df-002efc2e727e", 00:11:38.316 "is_configured": true, 00:11:38.316 "data_offset": 2048, 00:11:38.316 "data_size": 63488 00:11:38.316 }, 00:11:38.316 { 00:11:38.316 "name": "BaseBdev4", 00:11:38.316 "uuid": 
"c2eea63e-545e-4688-bfc8-3d46b6ad63c4", 00:11:38.316 "is_configured": true, 00:11:38.316 "data_offset": 2048, 00:11:38.316 "data_size": 63488 00:11:38.316 } 00:11:38.316 ] 00:11:38.316 }' 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.316 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.882 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.882 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.882 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.882 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:38.882 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.882 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:38.882 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:38.882 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.882 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.882 [2024-11-20 17:04:02.671125] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:38.882 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.882 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.882 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.883 17:04:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.883 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.141 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.141 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.141 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.141 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.141 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.141 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.141 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.141 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.141 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.141 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.141 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.141 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.141 "name": "Existed_Raid", 00:11:39.141 "uuid": "1dcb20d3-879b-434b-9fb6-1fe7a59616d3", 00:11:39.141 "strip_size_kb": 0, 00:11:39.141 "state": "configuring", 00:11:39.141 "raid_level": "raid1", 00:11:39.141 "superblock": true, 00:11:39.141 "num_base_bdevs": 4, 00:11:39.141 "num_base_bdevs_discovered": 2, 00:11:39.141 "num_base_bdevs_operational": 4, 00:11:39.141 "base_bdevs_list": [ 00:11:39.141 { 00:11:39.141 "name": null, 00:11:39.141 
"uuid": "e191d196-15c1-4a2a-a1d4-8220ba393abd", 00:11:39.141 "is_configured": false, 00:11:39.141 "data_offset": 0, 00:11:39.141 "data_size": 63488 00:11:39.141 }, 00:11:39.141 { 00:11:39.141 "name": null, 00:11:39.141 "uuid": "681f6936-4fc6-4226-9337-51382714d912", 00:11:39.141 "is_configured": false, 00:11:39.141 "data_offset": 0, 00:11:39.141 "data_size": 63488 00:11:39.141 }, 00:11:39.141 { 00:11:39.141 "name": "BaseBdev3", 00:11:39.141 "uuid": "c3936987-93dd-4be2-a6df-002efc2e727e", 00:11:39.141 "is_configured": true, 00:11:39.141 "data_offset": 2048, 00:11:39.141 "data_size": 63488 00:11:39.141 }, 00:11:39.141 { 00:11:39.141 "name": "BaseBdev4", 00:11:39.141 "uuid": "c2eea63e-545e-4688-bfc8-3d46b6ad63c4", 00:11:39.141 "is_configured": true, 00:11:39.141 "data_offset": 2048, 00:11:39.141 "data_size": 63488 00:11:39.141 } 00:11:39.141 ] 00:11:39.141 }' 00:11:39.141 17:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.141 17:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.400 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.400 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:39.400 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.400 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.658 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.658 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:39.658 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:39.658 17:04:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.658 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.658 [2024-11-20 17:04:03.315482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.658 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.658 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:39.658 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.658 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.658 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.658 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.658 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.658 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.659 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.659 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.659 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.659 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.659 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.659 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.659 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.659 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.659 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.659 "name": "Existed_Raid", 00:11:39.659 "uuid": "1dcb20d3-879b-434b-9fb6-1fe7a59616d3", 00:11:39.659 "strip_size_kb": 0, 00:11:39.659 "state": "configuring", 00:11:39.659 "raid_level": "raid1", 00:11:39.659 "superblock": true, 00:11:39.659 "num_base_bdevs": 4, 00:11:39.659 "num_base_bdevs_discovered": 3, 00:11:39.659 "num_base_bdevs_operational": 4, 00:11:39.659 "base_bdevs_list": [ 00:11:39.659 { 00:11:39.659 "name": null, 00:11:39.659 "uuid": "e191d196-15c1-4a2a-a1d4-8220ba393abd", 00:11:39.659 "is_configured": false, 00:11:39.659 "data_offset": 0, 00:11:39.659 "data_size": 63488 00:11:39.659 }, 00:11:39.659 { 00:11:39.659 "name": "BaseBdev2", 00:11:39.659 "uuid": "681f6936-4fc6-4226-9337-51382714d912", 00:11:39.659 "is_configured": true, 00:11:39.659 "data_offset": 2048, 00:11:39.659 "data_size": 63488 00:11:39.659 }, 00:11:39.659 { 00:11:39.659 "name": "BaseBdev3", 00:11:39.659 "uuid": "c3936987-93dd-4be2-a6df-002efc2e727e", 00:11:39.659 "is_configured": true, 00:11:39.659 "data_offset": 2048, 00:11:39.659 "data_size": 63488 00:11:39.659 }, 00:11:39.659 { 00:11:39.659 "name": "BaseBdev4", 00:11:39.659 "uuid": "c2eea63e-545e-4688-bfc8-3d46b6ad63c4", 00:11:39.659 "is_configured": true, 00:11:39.659 "data_offset": 2048, 00:11:39.659 "data_size": 63488 00:11:39.659 } 00:11:39.659 ] 00:11:39.659 }' 00:11:39.659 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.659 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e191d196-15c1-4a2a-a1d4-8220ba393abd 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.227 [2024-11-20 17:04:03.983217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:40.227 [2024-11-20 17:04:03.983508] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:40.227 [2024-11-20 17:04:03.983532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:40.227 [2024-11-20 17:04:03.983901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:40.227 
NewBaseBdev 00:11:40.227 [2024-11-20 17:04:03.984133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:40.227 [2024-11-20 17:04:03.984155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:40.227 [2024-11-20 17:04:03.984314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.227 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.228 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.228 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.228 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:40.228 17:04:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.228 17:04:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:40.228 [ 00:11:40.228 { 00:11:40.228 "name": "NewBaseBdev", 00:11:40.228 "aliases": [ 00:11:40.228 "e191d196-15c1-4a2a-a1d4-8220ba393abd" 00:11:40.228 ], 00:11:40.228 "product_name": "Malloc disk", 00:11:40.228 "block_size": 512, 00:11:40.228 "num_blocks": 65536, 00:11:40.228 "uuid": "e191d196-15c1-4a2a-a1d4-8220ba393abd", 00:11:40.228 "assigned_rate_limits": { 00:11:40.228 "rw_ios_per_sec": 0, 00:11:40.228 "rw_mbytes_per_sec": 0, 00:11:40.228 "r_mbytes_per_sec": 0, 00:11:40.228 "w_mbytes_per_sec": 0 00:11:40.228 }, 00:11:40.228 "claimed": true, 00:11:40.228 "claim_type": "exclusive_write", 00:11:40.228 "zoned": false, 00:11:40.228 "supported_io_types": { 00:11:40.228 "read": true, 00:11:40.228 "write": true, 00:11:40.228 "unmap": true, 00:11:40.228 "flush": true, 00:11:40.228 "reset": true, 00:11:40.228 "nvme_admin": false, 00:11:40.228 "nvme_io": false, 00:11:40.228 "nvme_io_md": false, 00:11:40.228 "write_zeroes": true, 00:11:40.228 "zcopy": true, 00:11:40.228 "get_zone_info": false, 00:11:40.228 "zone_management": false, 00:11:40.228 "zone_append": false, 00:11:40.228 "compare": false, 00:11:40.228 "compare_and_write": false, 00:11:40.228 "abort": true, 00:11:40.228 "seek_hole": false, 00:11:40.228 "seek_data": false, 00:11:40.228 "copy": true, 00:11:40.228 "nvme_iov_md": false 00:11:40.228 }, 00:11:40.228 "memory_domains": [ 00:11:40.228 { 00:11:40.228 "dma_device_id": "system", 00:11:40.228 "dma_device_type": 1 00:11:40.228 }, 00:11:40.228 { 00:11:40.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.228 "dma_device_type": 2 00:11:40.228 } 00:11:40.228 ], 00:11:40.228 "driver_specific": {} 00:11:40.228 } 00:11:40.228 ] 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.228 "name": "Existed_Raid", 00:11:40.228 "uuid": "1dcb20d3-879b-434b-9fb6-1fe7a59616d3", 00:11:40.228 "strip_size_kb": 0, 00:11:40.228 "state": "online", 00:11:40.228 "raid_level": 
"raid1", 00:11:40.228 "superblock": true, 00:11:40.228 "num_base_bdevs": 4, 00:11:40.228 "num_base_bdevs_discovered": 4, 00:11:40.228 "num_base_bdevs_operational": 4, 00:11:40.228 "base_bdevs_list": [ 00:11:40.228 { 00:11:40.228 "name": "NewBaseBdev", 00:11:40.228 "uuid": "e191d196-15c1-4a2a-a1d4-8220ba393abd", 00:11:40.228 "is_configured": true, 00:11:40.228 "data_offset": 2048, 00:11:40.228 "data_size": 63488 00:11:40.228 }, 00:11:40.228 { 00:11:40.228 "name": "BaseBdev2", 00:11:40.228 "uuid": "681f6936-4fc6-4226-9337-51382714d912", 00:11:40.228 "is_configured": true, 00:11:40.228 "data_offset": 2048, 00:11:40.228 "data_size": 63488 00:11:40.228 }, 00:11:40.228 { 00:11:40.228 "name": "BaseBdev3", 00:11:40.228 "uuid": "c3936987-93dd-4be2-a6df-002efc2e727e", 00:11:40.228 "is_configured": true, 00:11:40.228 "data_offset": 2048, 00:11:40.228 "data_size": 63488 00:11:40.228 }, 00:11:40.228 { 00:11:40.228 "name": "BaseBdev4", 00:11:40.228 "uuid": "c2eea63e-545e-4688-bfc8-3d46b6ad63c4", 00:11:40.228 "is_configured": true, 00:11:40.228 "data_offset": 2048, 00:11:40.228 "data_size": 63488 00:11:40.228 } 00:11:40.228 ] 00:11:40.228 }' 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.228 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.795 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:40.795 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:40.795 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:40.795 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:40.795 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:40.795 17:04:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:40.795 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:40.795 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:40.795 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.795 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.795 [2024-11-20 17:04:04.523888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.795 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.795 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:40.795 "name": "Existed_Raid", 00:11:40.795 "aliases": [ 00:11:40.795 "1dcb20d3-879b-434b-9fb6-1fe7a59616d3" 00:11:40.795 ], 00:11:40.795 "product_name": "Raid Volume", 00:11:40.795 "block_size": 512, 00:11:40.795 "num_blocks": 63488, 00:11:40.795 "uuid": "1dcb20d3-879b-434b-9fb6-1fe7a59616d3", 00:11:40.795 "assigned_rate_limits": { 00:11:40.795 "rw_ios_per_sec": 0, 00:11:40.795 "rw_mbytes_per_sec": 0, 00:11:40.795 "r_mbytes_per_sec": 0, 00:11:40.795 "w_mbytes_per_sec": 0 00:11:40.795 }, 00:11:40.795 "claimed": false, 00:11:40.795 "zoned": false, 00:11:40.795 "supported_io_types": { 00:11:40.795 "read": true, 00:11:40.795 "write": true, 00:11:40.795 "unmap": false, 00:11:40.795 "flush": false, 00:11:40.795 "reset": true, 00:11:40.795 "nvme_admin": false, 00:11:40.795 "nvme_io": false, 00:11:40.795 "nvme_io_md": false, 00:11:40.795 "write_zeroes": true, 00:11:40.795 "zcopy": false, 00:11:40.795 "get_zone_info": false, 00:11:40.795 "zone_management": false, 00:11:40.795 "zone_append": false, 00:11:40.795 "compare": false, 00:11:40.795 "compare_and_write": false, 00:11:40.795 "abort": false, 00:11:40.795 "seek_hole": false, 
00:11:40.795 "seek_data": false, 00:11:40.795 "copy": false, 00:11:40.795 "nvme_iov_md": false 00:11:40.795 }, 00:11:40.795 "memory_domains": [ 00:11:40.795 { 00:11:40.795 "dma_device_id": "system", 00:11:40.795 "dma_device_type": 1 00:11:40.795 }, 00:11:40.795 { 00:11:40.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.795 "dma_device_type": 2 00:11:40.795 }, 00:11:40.795 { 00:11:40.796 "dma_device_id": "system", 00:11:40.796 "dma_device_type": 1 00:11:40.796 }, 00:11:40.796 { 00:11:40.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.796 "dma_device_type": 2 00:11:40.796 }, 00:11:40.796 { 00:11:40.796 "dma_device_id": "system", 00:11:40.796 "dma_device_type": 1 00:11:40.796 }, 00:11:40.796 { 00:11:40.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.796 "dma_device_type": 2 00:11:40.796 }, 00:11:40.796 { 00:11:40.796 "dma_device_id": "system", 00:11:40.796 "dma_device_type": 1 00:11:40.796 }, 00:11:40.796 { 00:11:40.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.796 "dma_device_type": 2 00:11:40.796 } 00:11:40.796 ], 00:11:40.796 "driver_specific": { 00:11:40.796 "raid": { 00:11:40.796 "uuid": "1dcb20d3-879b-434b-9fb6-1fe7a59616d3", 00:11:40.796 "strip_size_kb": 0, 00:11:40.796 "state": "online", 00:11:40.796 "raid_level": "raid1", 00:11:40.796 "superblock": true, 00:11:40.796 "num_base_bdevs": 4, 00:11:40.796 "num_base_bdevs_discovered": 4, 00:11:40.796 "num_base_bdevs_operational": 4, 00:11:40.796 "base_bdevs_list": [ 00:11:40.796 { 00:11:40.796 "name": "NewBaseBdev", 00:11:40.796 "uuid": "e191d196-15c1-4a2a-a1d4-8220ba393abd", 00:11:40.796 "is_configured": true, 00:11:40.796 "data_offset": 2048, 00:11:40.796 "data_size": 63488 00:11:40.796 }, 00:11:40.796 { 00:11:40.796 "name": "BaseBdev2", 00:11:40.796 "uuid": "681f6936-4fc6-4226-9337-51382714d912", 00:11:40.796 "is_configured": true, 00:11:40.796 "data_offset": 2048, 00:11:40.796 "data_size": 63488 00:11:40.796 }, 00:11:40.796 { 00:11:40.796 "name": "BaseBdev3", 00:11:40.796 "uuid": 
"c3936987-93dd-4be2-a6df-002efc2e727e", 00:11:40.796 "is_configured": true, 00:11:40.796 "data_offset": 2048, 00:11:40.796 "data_size": 63488 00:11:40.796 }, 00:11:40.796 { 00:11:40.796 "name": "BaseBdev4", 00:11:40.796 "uuid": "c2eea63e-545e-4688-bfc8-3d46b6ad63c4", 00:11:40.796 "is_configured": true, 00:11:40.796 "data_offset": 2048, 00:11:40.796 "data_size": 63488 00:11:40.796 } 00:11:40.796 ] 00:11:40.796 } 00:11:40.796 } 00:11:40.796 }' 00:11:40.796 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:40.796 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:40.796 BaseBdev2 00:11:40.796 BaseBdev3 00:11:40.796 BaseBdev4' 00:11:40.796 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.055 
17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.055 [2024-11-20 17:04:04.883552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.055 [2024-11-20 17:04:04.883586] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.055 [2024-11-20 17:04:04.883679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.055 [2024-11-20 17:04:04.884047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.055 [2024-11-20 17:04:04.884079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:41.055 17:04:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73819 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73819 ']' 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73819 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73819 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.055 killing process with pid 73819 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73819' 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73819 00:11:41.055 [2024-11-20 17:04:04.920950] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:41.055 17:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73819 00:11:41.624 [2024-11-20 17:04:05.228334] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:42.560 17:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:42.560 00:11:42.560 real 0m12.547s 00:11:42.560 user 0m21.027s 00:11:42.560 sys 0m1.693s 00:11:42.561 17:04:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.561 17:04:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:42.561 ************************************ 00:11:42.561 END TEST raid_state_function_test_sb 00:11:42.561 ************************************ 00:11:42.561 17:04:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:42.561 17:04:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:42.561 17:04:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.561 17:04:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:42.561 ************************************ 00:11:42.561 START TEST raid_superblock_test 00:11:42.561 ************************************ 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74497 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74497 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74497 ']' 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.561 17:04:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.561 [2024-11-20 17:04:06.379825] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:11:42.561 [2024-11-20 17:04:06.380724] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74497 ] 00:11:42.820 [2024-11-20 17:04:06.564347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.820 [2024-11-20 17:04:06.684781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.079 [2024-11-20 17:04:06.870502] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.079 [2024-11-20 17:04:06.870549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:43.664 
17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.664 malloc1 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.664 [2024-11-20 17:04:07.391256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:43.664 [2024-11-20 17:04:07.391338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.664 [2024-11-20 17:04:07.391369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:43.664 [2024-11-20 17:04:07.391386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.664 [2024-11-20 17:04:07.394492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.664 [2024-11-20 17:04:07.394551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:43.664 pt1 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.664 malloc2 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.664 [2024-11-20 17:04:07.444414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:43.664 [2024-11-20 17:04:07.444599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.664 [2024-11-20 17:04:07.444679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:43.664 [2024-11-20 17:04:07.444814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.664 [2024-11-20 17:04:07.447583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.664 [2024-11-20 17:04:07.447734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:43.664 
pt2 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.664 malloc3 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.664 [2024-11-20 17:04:07.511599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:43.664 [2024-11-20 17:04:07.511663] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.664 [2024-11-20 17:04:07.511697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:43.664 [2024-11-20 17:04:07.511713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.664 [2024-11-20 17:04:07.514600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.664 [2024-11-20 17:04:07.514641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:43.664 pt3 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.664 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.924 malloc4 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.924 [2024-11-20 17:04:07.565679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:43.924 [2024-11-20 17:04:07.565758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.924 [2024-11-20 17:04:07.565839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:43.924 [2024-11-20 17:04:07.565855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.924 [2024-11-20 17:04:07.568639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.924 [2024-11-20 17:04:07.568680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:43.924 pt4 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.924 [2024-11-20 17:04:07.577702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:43.924 [2024-11-20 17:04:07.580219] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:43.924 [2024-11-20 17:04:07.580474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:43.924 [2024-11-20 17:04:07.580579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:43.924 [2024-11-20 17:04:07.580892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:43.924 [2024-11-20 17:04:07.580931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:43.924 [2024-11-20 17:04:07.581259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:43.924 [2024-11-20 17:04:07.581457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:43.924 [2024-11-20 17:04:07.581479] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:43.924 [2024-11-20 17:04:07.581696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.924 
17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.924 "name": "raid_bdev1", 00:11:43.924 "uuid": "e4dc35ef-74e4-4298-bad8-c147665d1074", 00:11:43.924 "strip_size_kb": 0, 00:11:43.924 "state": "online", 00:11:43.924 "raid_level": "raid1", 00:11:43.924 "superblock": true, 00:11:43.924 "num_base_bdevs": 4, 00:11:43.924 "num_base_bdevs_discovered": 4, 00:11:43.924 "num_base_bdevs_operational": 4, 00:11:43.924 "base_bdevs_list": [ 00:11:43.924 { 00:11:43.924 "name": "pt1", 00:11:43.924 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:43.924 "is_configured": true, 00:11:43.924 "data_offset": 2048, 00:11:43.924 "data_size": 63488 00:11:43.924 }, 00:11:43.924 { 00:11:43.924 "name": "pt2", 00:11:43.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:43.924 "is_configured": true, 00:11:43.924 "data_offset": 2048, 00:11:43.924 "data_size": 63488 00:11:43.924 }, 00:11:43.924 { 00:11:43.924 "name": "pt3", 00:11:43.924 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:43.924 "is_configured": true, 00:11:43.924 "data_offset": 2048, 00:11:43.924 "data_size": 63488 
00:11:43.924 }, 00:11:43.924 { 00:11:43.924 "name": "pt4", 00:11:43.924 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:43.924 "is_configured": true, 00:11:43.924 "data_offset": 2048, 00:11:43.924 "data_size": 63488 00:11:43.924 } 00:11:43.924 ] 00:11:43.924 }' 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.924 17:04:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.493 [2024-11-20 17:04:08.114326] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:44.493 "name": "raid_bdev1", 00:11:44.493 "aliases": [ 00:11:44.493 "e4dc35ef-74e4-4298-bad8-c147665d1074" 00:11:44.493 ], 
00:11:44.493 "product_name": "Raid Volume", 00:11:44.493 "block_size": 512, 00:11:44.493 "num_blocks": 63488, 00:11:44.493 "uuid": "e4dc35ef-74e4-4298-bad8-c147665d1074", 00:11:44.493 "assigned_rate_limits": { 00:11:44.493 "rw_ios_per_sec": 0, 00:11:44.493 "rw_mbytes_per_sec": 0, 00:11:44.493 "r_mbytes_per_sec": 0, 00:11:44.493 "w_mbytes_per_sec": 0 00:11:44.493 }, 00:11:44.493 "claimed": false, 00:11:44.493 "zoned": false, 00:11:44.493 "supported_io_types": { 00:11:44.493 "read": true, 00:11:44.493 "write": true, 00:11:44.493 "unmap": false, 00:11:44.493 "flush": false, 00:11:44.493 "reset": true, 00:11:44.493 "nvme_admin": false, 00:11:44.493 "nvme_io": false, 00:11:44.493 "nvme_io_md": false, 00:11:44.493 "write_zeroes": true, 00:11:44.493 "zcopy": false, 00:11:44.493 "get_zone_info": false, 00:11:44.493 "zone_management": false, 00:11:44.493 "zone_append": false, 00:11:44.493 "compare": false, 00:11:44.493 "compare_and_write": false, 00:11:44.493 "abort": false, 00:11:44.493 "seek_hole": false, 00:11:44.493 "seek_data": false, 00:11:44.493 "copy": false, 00:11:44.493 "nvme_iov_md": false 00:11:44.493 }, 00:11:44.493 "memory_domains": [ 00:11:44.493 { 00:11:44.493 "dma_device_id": "system", 00:11:44.493 "dma_device_type": 1 00:11:44.493 }, 00:11:44.493 { 00:11:44.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.493 "dma_device_type": 2 00:11:44.493 }, 00:11:44.493 { 00:11:44.493 "dma_device_id": "system", 00:11:44.493 "dma_device_type": 1 00:11:44.493 }, 00:11:44.493 { 00:11:44.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.493 "dma_device_type": 2 00:11:44.493 }, 00:11:44.493 { 00:11:44.493 "dma_device_id": "system", 00:11:44.493 "dma_device_type": 1 00:11:44.493 }, 00:11:44.493 { 00:11:44.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.493 "dma_device_type": 2 00:11:44.493 }, 00:11:44.493 { 00:11:44.493 "dma_device_id": "system", 00:11:44.493 "dma_device_type": 1 00:11:44.493 }, 00:11:44.493 { 00:11:44.493 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:44.493 "dma_device_type": 2 00:11:44.493 } 00:11:44.493 ], 00:11:44.493 "driver_specific": { 00:11:44.493 "raid": { 00:11:44.493 "uuid": "e4dc35ef-74e4-4298-bad8-c147665d1074", 00:11:44.493 "strip_size_kb": 0, 00:11:44.493 "state": "online", 00:11:44.493 "raid_level": "raid1", 00:11:44.493 "superblock": true, 00:11:44.493 "num_base_bdevs": 4, 00:11:44.493 "num_base_bdevs_discovered": 4, 00:11:44.493 "num_base_bdevs_operational": 4, 00:11:44.493 "base_bdevs_list": [ 00:11:44.493 { 00:11:44.493 "name": "pt1", 00:11:44.493 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:44.493 "is_configured": true, 00:11:44.493 "data_offset": 2048, 00:11:44.493 "data_size": 63488 00:11:44.493 }, 00:11:44.493 { 00:11:44.493 "name": "pt2", 00:11:44.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:44.493 "is_configured": true, 00:11:44.493 "data_offset": 2048, 00:11:44.493 "data_size": 63488 00:11:44.493 }, 00:11:44.493 { 00:11:44.493 "name": "pt3", 00:11:44.493 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:44.493 "is_configured": true, 00:11:44.493 "data_offset": 2048, 00:11:44.493 "data_size": 63488 00:11:44.493 }, 00:11:44.493 { 00:11:44.493 "name": "pt4", 00:11:44.493 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:44.493 "is_configured": true, 00:11:44.493 "data_offset": 2048, 00:11:44.493 "data_size": 63488 00:11:44.493 } 00:11:44.493 ] 00:11:44.493 } 00:11:44.493 } 00:11:44.493 }' 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:44.493 pt2 00:11:44.493 pt3 00:11:44.493 pt4' 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.493 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.752 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.752 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.752 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.753 17:04:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.753 [2024-11-20 17:04:08.486363] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e4dc35ef-74e4-4298-bad8-c147665d1074 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e4dc35ef-74e4-4298-bad8-c147665d1074 ']' 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.753 [2024-11-20 17:04:08.534005] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:44.753 [2024-11-20 17:04:08.534031] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.753 [2024-11-20 17:04:08.534152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.753 [2024-11-20 17:04:08.534265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:44.753 [2024-11-20 17:04:08.534287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.753 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.013 17:04:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.013 [2024-11-20 17:04:08.694075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:45.013 [2024-11-20 17:04:08.696656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:45.013 [2024-11-20 17:04:08.696716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:45.013 [2024-11-20 17:04:08.696784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:45.013 [2024-11-20 17:04:08.696888] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:45.013 [2024-11-20 17:04:08.696958] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:45.013 [2024-11-20 17:04:08.696992] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:45.013 [2024-11-20 17:04:08.697022] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:45.013 [2024-11-20 17:04:08.697043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:45.013 [2024-11-20 17:04:08.697058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:11:45.013 request: 00:11:45.013 { 00:11:45.013 "name": "raid_bdev1", 00:11:45.013 "raid_level": "raid1", 00:11:45.013 "base_bdevs": [ 00:11:45.013 "malloc1", 00:11:45.013 "malloc2", 00:11:45.013 "malloc3", 00:11:45.013 "malloc4" 00:11:45.013 ], 00:11:45.013 "superblock": false, 00:11:45.013 "method": "bdev_raid_create", 00:11:45.013 "req_id": 1 00:11:45.013 } 00:11:45.013 Got JSON-RPC error response 00:11:45.013 response: 00:11:45.013 { 00:11:45.013 "code": -17, 00:11:45.013 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:45.013 } 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:45.013 
17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.013 [2024-11-20 17:04:08.762085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:45.013 [2024-11-20 17:04:08.762334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.013 [2024-11-20 17:04:08.762396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:45.013 [2024-11-20 17:04:08.762512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.013 [2024-11-20 17:04:08.765330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.013 [2024-11-20 17:04:08.765517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:45.013 [2024-11-20 17:04:08.765730] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:45.013 [2024-11-20 17:04:08.765835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:45.013 pt1 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.013 17:04:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.013 "name": "raid_bdev1", 00:11:45.013 "uuid": "e4dc35ef-74e4-4298-bad8-c147665d1074", 00:11:45.013 "strip_size_kb": 0, 00:11:45.013 "state": "configuring", 00:11:45.013 "raid_level": "raid1", 00:11:45.013 "superblock": true, 00:11:45.013 "num_base_bdevs": 4, 00:11:45.013 "num_base_bdevs_discovered": 1, 00:11:45.013 "num_base_bdevs_operational": 4, 00:11:45.013 "base_bdevs_list": [ 00:11:45.013 { 00:11:45.013 "name": "pt1", 00:11:45.013 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.013 "is_configured": true, 00:11:45.013 "data_offset": 2048, 00:11:45.013 "data_size": 63488 00:11:45.013 }, 00:11:45.013 { 00:11:45.013 "name": null, 00:11:45.013 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.013 "is_configured": false, 00:11:45.013 "data_offset": 2048, 00:11:45.013 "data_size": 63488 00:11:45.013 }, 00:11:45.013 { 00:11:45.013 "name": null, 00:11:45.013 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.013 
"is_configured": false, 00:11:45.013 "data_offset": 2048, 00:11:45.013 "data_size": 63488 00:11:45.013 }, 00:11:45.013 { 00:11:45.013 "name": null, 00:11:45.013 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.013 "is_configured": false, 00:11:45.013 "data_offset": 2048, 00:11:45.013 "data_size": 63488 00:11:45.013 } 00:11:45.013 ] 00:11:45.013 }' 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.013 17:04:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.582 [2024-11-20 17:04:09.298336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:45.582 [2024-11-20 17:04:09.298412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.582 [2024-11-20 17:04:09.298442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:45.582 [2024-11-20 17:04:09.298460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.582 [2024-11-20 17:04:09.298984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.582 [2024-11-20 17:04:09.299012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:45.582 [2024-11-20 17:04:09.299104] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:45.582 [2024-11-20 17:04:09.299147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:45.582 pt2 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.582 [2024-11-20 17:04:09.306304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.582 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.582 "name": "raid_bdev1", 00:11:45.582 "uuid": "e4dc35ef-74e4-4298-bad8-c147665d1074", 00:11:45.582 "strip_size_kb": 0, 00:11:45.582 "state": "configuring", 00:11:45.582 "raid_level": "raid1", 00:11:45.582 "superblock": true, 00:11:45.582 "num_base_bdevs": 4, 00:11:45.582 "num_base_bdevs_discovered": 1, 00:11:45.582 "num_base_bdevs_operational": 4, 00:11:45.582 "base_bdevs_list": [ 00:11:45.582 { 00:11:45.582 "name": "pt1", 00:11:45.582 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.582 "is_configured": true, 00:11:45.583 "data_offset": 2048, 00:11:45.583 "data_size": 63488 00:11:45.583 }, 00:11:45.583 { 00:11:45.583 "name": null, 00:11:45.583 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.583 "is_configured": false, 00:11:45.583 "data_offset": 0, 00:11:45.583 "data_size": 63488 00:11:45.583 }, 00:11:45.583 { 00:11:45.583 "name": null, 00:11:45.583 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.583 "is_configured": false, 00:11:45.583 "data_offset": 2048, 00:11:45.583 "data_size": 63488 00:11:45.583 }, 00:11:45.583 { 00:11:45.583 "name": null, 00:11:45.583 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.583 "is_configured": false, 00:11:45.583 "data_offset": 2048, 00:11:45.583 "data_size": 63488 00:11:45.583 } 00:11:45.583 ] 00:11:45.583 }' 00:11:45.583 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.583 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.151 [2024-11-20 17:04:09.822438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:46.151 [2024-11-20 17:04:09.822528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.151 [2024-11-20 17:04:09.822558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:46.151 [2024-11-20 17:04:09.822572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.151 [2024-11-20 17:04:09.823188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.151 [2024-11-20 17:04:09.823218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:46.151 [2024-11-20 17:04:09.823313] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:46.151 [2024-11-20 17:04:09.823358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:46.151 pt2 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:46.151 17:04:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.151 [2024-11-20 17:04:09.834436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:46.151 [2024-11-20 17:04:09.834680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.151 [2024-11-20 17:04:09.834717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:46.151 [2024-11-20 17:04:09.834732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.151 [2024-11-20 17:04:09.835199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.151 [2024-11-20 17:04:09.835224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:46.151 [2024-11-20 17:04:09.835304] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:46.151 [2024-11-20 17:04:09.835332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:46.151 pt3 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.151 [2024-11-20 17:04:09.842416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:46.151 [2024-11-20 
17:04:09.842477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.151 [2024-11-20 17:04:09.842511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:46.151 [2024-11-20 17:04:09.842523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.151 [2024-11-20 17:04:09.842997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.151 [2024-11-20 17:04:09.843036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:46.151 [2024-11-20 17:04:09.843115] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:46.151 [2024-11-20 17:04:09.843180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:46.151 [2024-11-20 17:04:09.843388] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:46.151 [2024-11-20 17:04:09.843411] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:46.151 [2024-11-20 17:04:09.843747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:46.151 [2024-11-20 17:04:09.844029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:46.151 [2024-11-20 17:04:09.844050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:46.151 [2024-11-20 17:04:09.844252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.151 pt4 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.151 "name": "raid_bdev1", 00:11:46.151 "uuid": "e4dc35ef-74e4-4298-bad8-c147665d1074", 00:11:46.151 "strip_size_kb": 0, 00:11:46.151 "state": "online", 00:11:46.151 "raid_level": "raid1", 00:11:46.151 "superblock": true, 00:11:46.151 "num_base_bdevs": 4, 00:11:46.151 
"num_base_bdevs_discovered": 4, 00:11:46.151 "num_base_bdevs_operational": 4, 00:11:46.151 "base_bdevs_list": [ 00:11:46.151 { 00:11:46.151 "name": "pt1", 00:11:46.151 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.151 "is_configured": true, 00:11:46.151 "data_offset": 2048, 00:11:46.151 "data_size": 63488 00:11:46.151 }, 00:11:46.151 { 00:11:46.151 "name": "pt2", 00:11:46.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.151 "is_configured": true, 00:11:46.151 "data_offset": 2048, 00:11:46.151 "data_size": 63488 00:11:46.151 }, 00:11:46.151 { 00:11:46.151 "name": "pt3", 00:11:46.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.151 "is_configured": true, 00:11:46.151 "data_offset": 2048, 00:11:46.151 "data_size": 63488 00:11:46.151 }, 00:11:46.151 { 00:11:46.151 "name": "pt4", 00:11:46.151 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.151 "is_configured": true, 00:11:46.151 "data_offset": 2048, 00:11:46.151 "data_size": 63488 00:11:46.151 } 00:11:46.151 ] 00:11:46.151 }' 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.151 17:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.718 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:46.718 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:46.718 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:46.718 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:46.718 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:46.718 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:46.718 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:46.718 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:46.718 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.718 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.718 [2024-11-20 17:04:10.379110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.718 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.718 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:46.718 "name": "raid_bdev1", 00:11:46.718 "aliases": [ 00:11:46.718 "e4dc35ef-74e4-4298-bad8-c147665d1074" 00:11:46.719 ], 00:11:46.719 "product_name": "Raid Volume", 00:11:46.719 "block_size": 512, 00:11:46.719 "num_blocks": 63488, 00:11:46.719 "uuid": "e4dc35ef-74e4-4298-bad8-c147665d1074", 00:11:46.719 "assigned_rate_limits": { 00:11:46.719 "rw_ios_per_sec": 0, 00:11:46.719 "rw_mbytes_per_sec": 0, 00:11:46.719 "r_mbytes_per_sec": 0, 00:11:46.719 "w_mbytes_per_sec": 0 00:11:46.719 }, 00:11:46.719 "claimed": false, 00:11:46.719 "zoned": false, 00:11:46.719 "supported_io_types": { 00:11:46.719 "read": true, 00:11:46.719 "write": true, 00:11:46.719 "unmap": false, 00:11:46.719 "flush": false, 00:11:46.719 "reset": true, 00:11:46.719 "nvme_admin": false, 00:11:46.719 "nvme_io": false, 00:11:46.719 "nvme_io_md": false, 00:11:46.719 "write_zeroes": true, 00:11:46.719 "zcopy": false, 00:11:46.719 "get_zone_info": false, 00:11:46.719 "zone_management": false, 00:11:46.719 "zone_append": false, 00:11:46.719 "compare": false, 00:11:46.719 "compare_and_write": false, 00:11:46.719 "abort": false, 00:11:46.719 "seek_hole": false, 00:11:46.719 "seek_data": false, 00:11:46.719 "copy": false, 00:11:46.719 "nvme_iov_md": false 00:11:46.719 }, 00:11:46.719 "memory_domains": [ 00:11:46.719 { 00:11:46.719 "dma_device_id": "system", 00:11:46.719 
"dma_device_type": 1 00:11:46.719 }, 00:11:46.719 { 00:11:46.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.719 "dma_device_type": 2 00:11:46.719 }, 00:11:46.719 { 00:11:46.719 "dma_device_id": "system", 00:11:46.719 "dma_device_type": 1 00:11:46.719 }, 00:11:46.719 { 00:11:46.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.719 "dma_device_type": 2 00:11:46.719 }, 00:11:46.719 { 00:11:46.719 "dma_device_id": "system", 00:11:46.719 "dma_device_type": 1 00:11:46.719 }, 00:11:46.719 { 00:11:46.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.719 "dma_device_type": 2 00:11:46.719 }, 00:11:46.719 { 00:11:46.719 "dma_device_id": "system", 00:11:46.719 "dma_device_type": 1 00:11:46.719 }, 00:11:46.719 { 00:11:46.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.719 "dma_device_type": 2 00:11:46.719 } 00:11:46.719 ], 00:11:46.719 "driver_specific": { 00:11:46.719 "raid": { 00:11:46.719 "uuid": "e4dc35ef-74e4-4298-bad8-c147665d1074", 00:11:46.719 "strip_size_kb": 0, 00:11:46.719 "state": "online", 00:11:46.719 "raid_level": "raid1", 00:11:46.719 "superblock": true, 00:11:46.719 "num_base_bdevs": 4, 00:11:46.719 "num_base_bdevs_discovered": 4, 00:11:46.719 "num_base_bdevs_operational": 4, 00:11:46.719 "base_bdevs_list": [ 00:11:46.719 { 00:11:46.719 "name": "pt1", 00:11:46.719 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.719 "is_configured": true, 00:11:46.719 "data_offset": 2048, 00:11:46.719 "data_size": 63488 00:11:46.719 }, 00:11:46.719 { 00:11:46.719 "name": "pt2", 00:11:46.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.719 "is_configured": true, 00:11:46.719 "data_offset": 2048, 00:11:46.719 "data_size": 63488 00:11:46.719 }, 00:11:46.719 { 00:11:46.719 "name": "pt3", 00:11:46.719 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.719 "is_configured": true, 00:11:46.719 "data_offset": 2048, 00:11:46.719 "data_size": 63488 00:11:46.719 }, 00:11:46.719 { 00:11:46.719 "name": "pt4", 00:11:46.719 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:46.719 "is_configured": true, 00:11:46.719 "data_offset": 2048, 00:11:46.719 "data_size": 63488 00:11:46.719 } 00:11:46.719 ] 00:11:46.719 } 00:11:46.719 } 00:11:46.719 }' 00:11:46.719 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:46.719 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:46.719 pt2 00:11:46.719 pt3 00:11:46.719 pt4' 00:11:46.719 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.719 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:46.719 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.719 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:46.719 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.719 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.719 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.719 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.719 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.719 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.719 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.719 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:46.719 17:04:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.719 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.719 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.978 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:46.979 [2024-11-20 17:04:10.743075] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e4dc35ef-74e4-4298-bad8-c147665d1074 '!=' e4dc35ef-74e4-4298-bad8-c147665d1074 ']' 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.979 [2024-11-20 17:04:10.794751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:46.979 17:04:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.979 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.334 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.334 "name": "raid_bdev1", 00:11:47.334 "uuid": "e4dc35ef-74e4-4298-bad8-c147665d1074", 00:11:47.334 "strip_size_kb": 0, 00:11:47.334 "state": "online", 
00:11:47.334 "raid_level": "raid1", 00:11:47.334 "superblock": true, 00:11:47.334 "num_base_bdevs": 4, 00:11:47.334 "num_base_bdevs_discovered": 3, 00:11:47.334 "num_base_bdevs_operational": 3, 00:11:47.334 "base_bdevs_list": [ 00:11:47.334 { 00:11:47.334 "name": null, 00:11:47.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.334 "is_configured": false, 00:11:47.334 "data_offset": 0, 00:11:47.334 "data_size": 63488 00:11:47.334 }, 00:11:47.334 { 00:11:47.334 "name": "pt2", 00:11:47.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.334 "is_configured": true, 00:11:47.334 "data_offset": 2048, 00:11:47.334 "data_size": 63488 00:11:47.334 }, 00:11:47.334 { 00:11:47.334 "name": "pt3", 00:11:47.334 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.334 "is_configured": true, 00:11:47.334 "data_offset": 2048, 00:11:47.334 "data_size": 63488 00:11:47.334 }, 00:11:47.334 { 00:11:47.334 "name": "pt4", 00:11:47.334 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.334 "is_configured": true, 00:11:47.334 "data_offset": 2048, 00:11:47.334 "data_size": 63488 00:11:47.335 } 00:11:47.335 ] 00:11:47.335 }' 00:11:47.335 17:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.335 17:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.607 [2024-11-20 17:04:11.326907] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.607 [2024-11-20 17:04:11.326943] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.607 [2024-11-20 17:04:11.327030] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:47.607 [2024-11-20 17:04:11.327172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.607 [2024-11-20 17:04:11.327186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:47.607 
17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.607 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.608 [2024-11-20 17:04:11.418922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.608 [2024-11-20 17:04:11.418978] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.608 [2024-11-20 17:04:11.419006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:47.608 [2024-11-20 17:04:11.419021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.608 [2024-11-20 17:04:11.421889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.608 [2024-11-20 17:04:11.421928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.608 [2024-11-20 17:04:11.422027] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:47.608 [2024-11-20 17:04:11.422086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.608 pt2 00:11:47.608 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.608 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:47.608 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.608 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.608 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.608 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.608 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.608 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.608 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.608 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.608 17:04:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.608 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.608 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.608 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.608 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.608 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.866 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.866 "name": "raid_bdev1", 00:11:47.866 "uuid": "e4dc35ef-74e4-4298-bad8-c147665d1074", 00:11:47.866 "strip_size_kb": 0, 00:11:47.866 "state": "configuring", 00:11:47.866 "raid_level": "raid1", 00:11:47.866 "superblock": true, 00:11:47.866 "num_base_bdevs": 4, 00:11:47.866 "num_base_bdevs_discovered": 1, 00:11:47.867 "num_base_bdevs_operational": 3, 00:11:47.867 "base_bdevs_list": [ 00:11:47.867 { 00:11:47.867 "name": null, 00:11:47.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.867 "is_configured": false, 00:11:47.867 "data_offset": 2048, 00:11:47.867 "data_size": 63488 00:11:47.867 }, 00:11:47.867 { 00:11:47.867 "name": "pt2", 00:11:47.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.867 "is_configured": true, 00:11:47.867 "data_offset": 2048, 00:11:47.867 "data_size": 63488 00:11:47.867 }, 00:11:47.867 { 00:11:47.867 "name": null, 00:11:47.867 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.867 "is_configured": false, 00:11:47.867 "data_offset": 2048, 00:11:47.867 "data_size": 63488 00:11:47.867 }, 00:11:47.867 { 00:11:47.867 "name": null, 00:11:47.867 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.867 "is_configured": false, 00:11:47.867 "data_offset": 2048, 00:11:47.867 "data_size": 63488 00:11:47.867 } 00:11:47.867 ] 00:11:47.867 }' 
00:11:47.867 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.867 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.125 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:48.125 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:48.125 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:48.125 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.126 [2024-11-20 17:04:11.927152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:48.126 [2024-11-20 17:04:11.927251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.126 [2024-11-20 17:04:11.927283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:48.126 [2024-11-20 17:04:11.927298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.126 [2024-11-20 17:04:11.927913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.126 [2024-11-20 17:04:11.927938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:48.126 [2024-11-20 17:04:11.928037] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:48.126 [2024-11-20 17:04:11.928082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:48.126 pt3 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.126 "name": "raid_bdev1", 00:11:48.126 "uuid": "e4dc35ef-74e4-4298-bad8-c147665d1074", 00:11:48.126 "strip_size_kb": 0, 00:11:48.126 "state": "configuring", 00:11:48.126 "raid_level": "raid1", 00:11:48.126 "superblock": true, 00:11:48.126 "num_base_bdevs": 4, 00:11:48.126 "num_base_bdevs_discovered": 2, 00:11:48.126 "num_base_bdevs_operational": 3, 00:11:48.126 
"base_bdevs_list": [ 00:11:48.126 { 00:11:48.126 "name": null, 00:11:48.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.126 "is_configured": false, 00:11:48.126 "data_offset": 2048, 00:11:48.126 "data_size": 63488 00:11:48.126 }, 00:11:48.126 { 00:11:48.126 "name": "pt2", 00:11:48.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.126 "is_configured": true, 00:11:48.126 "data_offset": 2048, 00:11:48.126 "data_size": 63488 00:11:48.126 }, 00:11:48.126 { 00:11:48.126 "name": "pt3", 00:11:48.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.126 "is_configured": true, 00:11:48.126 "data_offset": 2048, 00:11:48.126 "data_size": 63488 00:11:48.126 }, 00:11:48.126 { 00:11:48.126 "name": null, 00:11:48.126 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.126 "is_configured": false, 00:11:48.126 "data_offset": 2048, 00:11:48.126 "data_size": 63488 00:11:48.126 } 00:11:48.126 ] 00:11:48.126 }' 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.126 17:04:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.694 [2024-11-20 17:04:12.463325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:48.694 [2024-11-20 17:04:12.463413] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.694 [2024-11-20 17:04:12.463447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:48.694 [2024-11-20 17:04:12.463507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.694 [2024-11-20 17:04:12.464062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.694 [2024-11-20 17:04:12.464234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:48.694 [2024-11-20 17:04:12.464367] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:48.694 [2024-11-20 17:04:12.464401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:48.694 [2024-11-20 17:04:12.464573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:48.694 [2024-11-20 17:04:12.464588] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:48.694 [2024-11-20 17:04:12.464911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:48.694 [2024-11-20 17:04:12.465113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:48.694 [2024-11-20 17:04:12.465157] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:48.694 [2024-11-20 17:04:12.465331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.694 pt4 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.694 "name": "raid_bdev1", 00:11:48.694 "uuid": "e4dc35ef-74e4-4298-bad8-c147665d1074", 00:11:48.694 "strip_size_kb": 0, 00:11:48.694 "state": "online", 00:11:48.694 "raid_level": "raid1", 00:11:48.694 "superblock": true, 00:11:48.694 "num_base_bdevs": 4, 00:11:48.694 "num_base_bdevs_discovered": 3, 00:11:48.694 "num_base_bdevs_operational": 3, 00:11:48.694 "base_bdevs_list": [ 00:11:48.694 { 00:11:48.694 "name": null, 00:11:48.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.694 "is_configured": false, 00:11:48.694 
"data_offset": 2048, 00:11:48.694 "data_size": 63488 00:11:48.694 }, 00:11:48.694 { 00:11:48.694 "name": "pt2", 00:11:48.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.694 "is_configured": true, 00:11:48.694 "data_offset": 2048, 00:11:48.694 "data_size": 63488 00:11:48.694 }, 00:11:48.694 { 00:11:48.694 "name": "pt3", 00:11:48.694 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.694 "is_configured": true, 00:11:48.694 "data_offset": 2048, 00:11:48.694 "data_size": 63488 00:11:48.694 }, 00:11:48.694 { 00:11:48.694 "name": "pt4", 00:11:48.694 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.694 "is_configured": true, 00:11:48.694 "data_offset": 2048, 00:11:48.694 "data_size": 63488 00:11:48.694 } 00:11:48.694 ] 00:11:48.694 }' 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.694 17:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.263 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:49.263 17:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.263 17:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.263 [2024-11-20 17:04:12.975397] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:49.263 [2024-11-20 17:04:12.975429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.263 [2024-11-20 17:04:12.975552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.263 [2024-11-20 17:04:12.975648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.263 [2024-11-20 17:04:12.975670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:49.263 17:04:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.263 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.263 17:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:49.263 17:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.263 17:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.263 17:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.263 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:49.263 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:49.263 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:49.263 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:49.263 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:49.263 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.263 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.263 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.263 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:49.263 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.263 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.263 [2024-11-20 17:04:13.047449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:49.263 [2024-11-20 17:04:13.047548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:49.263 [2024-11-20 17:04:13.047575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:49.263 [2024-11-20 17:04:13.047594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.263 [2024-11-20 17:04:13.050563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.263 [2024-11-20 17:04:13.050624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:49.263 [2024-11-20 17:04:13.050725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:49.263 [2024-11-20 17:04:13.050814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:49.263 [2024-11-20 17:04:13.051022] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:49.263 [2024-11-20 17:04:13.051046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:49.263 [2024-11-20 17:04:13.051066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:49.263 [2024-11-20 17:04:13.051153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:49.263 [2024-11-20 17:04:13.051331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:49.263 pt1 00:11:49.263 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.264 "name": "raid_bdev1", 00:11:49.264 "uuid": "e4dc35ef-74e4-4298-bad8-c147665d1074", 00:11:49.264 "strip_size_kb": 0, 00:11:49.264 "state": "configuring", 00:11:49.264 "raid_level": "raid1", 00:11:49.264 "superblock": true, 00:11:49.264 "num_base_bdevs": 4, 00:11:49.264 "num_base_bdevs_discovered": 2, 00:11:49.264 "num_base_bdevs_operational": 3, 00:11:49.264 "base_bdevs_list": [ 00:11:49.264 { 00:11:49.264 "name": null, 00:11:49.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.264 "is_configured": false, 00:11:49.264 "data_offset": 2048, 00:11:49.264 
"data_size": 63488 00:11:49.264 }, 00:11:49.264 { 00:11:49.264 "name": "pt2", 00:11:49.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.264 "is_configured": true, 00:11:49.264 "data_offset": 2048, 00:11:49.264 "data_size": 63488 00:11:49.264 }, 00:11:49.264 { 00:11:49.264 "name": "pt3", 00:11:49.264 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.264 "is_configured": true, 00:11:49.264 "data_offset": 2048, 00:11:49.264 "data_size": 63488 00:11:49.264 }, 00:11:49.264 { 00:11:49.264 "name": null, 00:11:49.264 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:49.264 "is_configured": false, 00:11:49.264 "data_offset": 2048, 00:11:49.264 "data_size": 63488 00:11:49.264 } 00:11:49.264 ] 00:11:49.264 }' 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.264 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.831 [2024-11-20 
17:04:13.619664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:49.831 [2024-11-20 17:04:13.619729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.831 [2024-11-20 17:04:13.619769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:49.831 [2024-11-20 17:04:13.619787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.831 [2024-11-20 17:04:13.620314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.831 [2024-11-20 17:04:13.620338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:49.831 [2024-11-20 17:04:13.620424] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:49.831 [2024-11-20 17:04:13.620467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:49.831 [2024-11-20 17:04:13.620614] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:49.831 [2024-11-20 17:04:13.620629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:49.831 [2024-11-20 17:04:13.620984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:49.831 [2024-11-20 17:04:13.621178] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:49.831 [2024-11-20 17:04:13.621197] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:49.831 [2024-11-20 17:04:13.621363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.831 pt4 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:49.831 17:04:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.831 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.831 "name": "raid_bdev1", 00:11:49.831 "uuid": "e4dc35ef-74e4-4298-bad8-c147665d1074", 00:11:49.831 "strip_size_kb": 0, 00:11:49.831 "state": "online", 00:11:49.831 "raid_level": "raid1", 00:11:49.831 "superblock": true, 00:11:49.831 "num_base_bdevs": 4, 00:11:49.831 "num_base_bdevs_discovered": 3, 00:11:49.831 "num_base_bdevs_operational": 3, 00:11:49.831 "base_bdevs_list": [ 00:11:49.831 { 
00:11:49.831 "name": null, 00:11:49.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.831 "is_configured": false, 00:11:49.831 "data_offset": 2048, 00:11:49.831 "data_size": 63488 00:11:49.831 }, 00:11:49.831 { 00:11:49.831 "name": "pt2", 00:11:49.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.832 "is_configured": true, 00:11:49.832 "data_offset": 2048, 00:11:49.832 "data_size": 63488 00:11:49.832 }, 00:11:49.832 { 00:11:49.832 "name": "pt3", 00:11:49.832 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.832 "is_configured": true, 00:11:49.832 "data_offset": 2048, 00:11:49.832 "data_size": 63488 00:11:49.832 }, 00:11:49.832 { 00:11:49.832 "name": "pt4", 00:11:49.832 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:49.832 "is_configured": true, 00:11:49.832 "data_offset": 2048, 00:11:49.832 "data_size": 63488 00:11:49.832 } 00:11:49.832 ] 00:11:49.832 }' 00:11:49.832 17:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.832 17:04:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.399 17:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:50.399 17:04:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.399 17:04:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.399 17:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:50.399 17:04:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.399 17:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:50.399 17:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:50.399 17:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:50.399 
17:04:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.399 17:04:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.399 [2024-11-20 17:04:14.208203] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.399 17:04:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.399 17:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e4dc35ef-74e4-4298-bad8-c147665d1074 '!=' e4dc35ef-74e4-4298-bad8-c147665d1074 ']' 00:11:50.399 17:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74497 00:11:50.399 17:04:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74497 ']' 00:11:50.399 17:04:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74497 00:11:50.399 17:04:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:50.399 17:04:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.400 17:04:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74497 00:11:50.658 killing process with pid 74497 00:11:50.658 17:04:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.658 17:04:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.658 17:04:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74497' 00:11:50.658 17:04:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74497 00:11:50.658 [2024-11-20 17:04:14.281651] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:50.658 [2024-11-20 17:04:14.281738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.658 17:04:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74497 00:11:50.658 [2024-11-20 17:04:14.281890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:50.658 [2024-11-20 17:04:14.281912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:50.917 [2024-11-20 17:04:14.593989] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:51.855 17:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:51.855 00:11:51.855 real 0m9.289s 00:11:51.855 user 0m15.422s 00:11:51.855 sys 0m1.293s 00:11:51.855 ************************************ 00:11:51.855 END TEST raid_superblock_test 00:11:51.855 ************************************ 00:11:51.855 17:04:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.855 17:04:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.855 17:04:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:51.855 17:04:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:51.855 17:04:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.855 17:04:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:51.855 ************************************ 00:11:51.855 START TEST raid_read_error_test 00:11:51.855 ************************************ 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:51.855 
17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:51.855 17:04:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pnKGhrDK4N 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74997 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74997 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74997 ']' 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.855 17:04:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.855 [2024-11-20 17:04:15.714655] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:11:51.855 [2024-11-20 17:04:15.714838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74997 ] 00:11:52.114 [2024-11-20 17:04:15.884670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.373 [2024-11-20 17:04:16.013657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.373 [2024-11-20 17:04:16.202812] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.373 [2024-11-20 17:04:16.202860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.942 BaseBdev1_malloc 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.942 true 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.942 [2024-11-20 17:04:16.776498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:52.942 [2024-11-20 17:04:16.776588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.942 [2024-11-20 17:04:16.776614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:52.942 [2024-11-20 17:04:16.776629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.942 [2024-11-20 17:04:16.779427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.942 [2024-11-20 17:04:16.779495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:52.942 BaseBdev1 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.942 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.201 BaseBdev2_malloc 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.201 true 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.201 [2024-11-20 17:04:16.831862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:53.201 [2024-11-20 17:04:16.831952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.201 [2024-11-20 17:04:16.831983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:53.201 [2024-11-20 17:04:16.831998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.201 [2024-11-20 17:04:16.834872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.201 [2024-11-20 17:04:16.834938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:53.201 BaseBdev2 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.201 BaseBdev3_malloc 00:11:53.201 17:04:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.201 true 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.201 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.201 [2024-11-20 17:04:16.901335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:53.201 [2024-11-20 17:04:16.901410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.201 [2024-11-20 17:04:16.901434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:53.201 [2024-11-20 17:04:16.901450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.201 [2024-11-20 17:04:16.904418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.201 [2024-11-20 17:04:16.904478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:53.201 BaseBdev3 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.202 BaseBdev4_malloc 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.202 true 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.202 [2024-11-20 17:04:16.958799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:53.202 [2024-11-20 17:04:16.958883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.202 [2024-11-20 17:04:16.958908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:53.202 [2024-11-20 17:04:16.958924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.202 [2024-11-20 17:04:16.961575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.202 [2024-11-20 17:04:16.961638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:53.202 BaseBdev4 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.202 [2024-11-20 17:04:16.966880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.202 [2024-11-20 17:04:16.969349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:53.202 [2024-11-20 17:04:16.969589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:53.202 [2024-11-20 17:04:16.969837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:53.202 [2024-11-20 17:04:16.970294] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:53.202 [2024-11-20 17:04:16.970442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:53.202 [2024-11-20 17:04:16.970816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:53.202 [2024-11-20 17:04:16.971165] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:53.202 [2024-11-20 17:04:16.971313] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:53.202 [2024-11-20 17:04:16.971769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:53.202 17:04:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.202 17:04:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.202 17:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.202 "name": "raid_bdev1", 00:11:53.202 "uuid": "9534f8ca-f276-481f-8783-e0caa470f41a", 00:11:53.202 "strip_size_kb": 0, 00:11:53.202 "state": "online", 00:11:53.202 "raid_level": "raid1", 00:11:53.202 "superblock": true, 00:11:53.202 "num_base_bdevs": 4, 00:11:53.202 "num_base_bdevs_discovered": 4, 00:11:53.202 "num_base_bdevs_operational": 4, 00:11:53.202 "base_bdevs_list": [ 00:11:53.202 { 
00:11:53.202 "name": "BaseBdev1", 00:11:53.202 "uuid": "64e4a062-ab23-5039-9679-2cb172eb5613", 00:11:53.202 "is_configured": true, 00:11:53.202 "data_offset": 2048, 00:11:53.202 "data_size": 63488 00:11:53.202 }, 00:11:53.202 { 00:11:53.202 "name": "BaseBdev2", 00:11:53.202 "uuid": "bf873b65-bcf5-55ce-8899-ef444a7c1b88", 00:11:53.202 "is_configured": true, 00:11:53.202 "data_offset": 2048, 00:11:53.202 "data_size": 63488 00:11:53.202 }, 00:11:53.202 { 00:11:53.202 "name": "BaseBdev3", 00:11:53.202 "uuid": "2fc97d20-d6d0-554f-a896-7295a25bf3ae", 00:11:53.202 "is_configured": true, 00:11:53.202 "data_offset": 2048, 00:11:53.202 "data_size": 63488 00:11:53.202 }, 00:11:53.202 { 00:11:53.202 "name": "BaseBdev4", 00:11:53.202 "uuid": "5c75cfd8-8c1a-56a6-a472-d385cc6c262f", 00:11:53.202 "is_configured": true, 00:11:53.202 "data_offset": 2048, 00:11:53.202 "data_size": 63488 00:11:53.202 } 00:11:53.202 ] 00:11:53.202 }' 00:11:53.202 17:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.202 17:04:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.770 17:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:53.770 17:04:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:53.770 [2024-11-20 17:04:17.565310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:54.707 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.708 17:04:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.708 17:04:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.708 "name": "raid_bdev1", 00:11:54.708 "uuid": "9534f8ca-f276-481f-8783-e0caa470f41a", 00:11:54.708 "strip_size_kb": 0, 00:11:54.708 "state": "online", 00:11:54.708 "raid_level": "raid1", 00:11:54.708 "superblock": true, 00:11:54.708 "num_base_bdevs": 4, 00:11:54.708 "num_base_bdevs_discovered": 4, 00:11:54.708 "num_base_bdevs_operational": 4, 00:11:54.708 "base_bdevs_list": [ 00:11:54.708 { 00:11:54.708 "name": "BaseBdev1", 00:11:54.708 "uuid": "64e4a062-ab23-5039-9679-2cb172eb5613", 00:11:54.708 "is_configured": true, 00:11:54.708 "data_offset": 2048, 00:11:54.708 "data_size": 63488 00:11:54.708 }, 00:11:54.708 { 00:11:54.708 "name": "BaseBdev2", 00:11:54.708 "uuid": "bf873b65-bcf5-55ce-8899-ef444a7c1b88", 00:11:54.708 "is_configured": true, 00:11:54.708 "data_offset": 2048, 00:11:54.708 "data_size": 63488 00:11:54.708 }, 00:11:54.708 { 00:11:54.708 "name": "BaseBdev3", 00:11:54.708 "uuid": "2fc97d20-d6d0-554f-a896-7295a25bf3ae", 00:11:54.708 "is_configured": true, 00:11:54.708 "data_offset": 2048, 00:11:54.708 "data_size": 63488 00:11:54.708 }, 00:11:54.708 { 00:11:54.708 "name": "BaseBdev4", 00:11:54.708 "uuid": "5c75cfd8-8c1a-56a6-a472-d385cc6c262f", 00:11:54.708 "is_configured": true, 00:11:54.708 "data_offset": 2048, 00:11:54.708 "data_size": 63488 00:11:54.708 } 00:11:54.708 ] 00:11:54.708 }' 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.708 17:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.276 17:04:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:55.276 17:04:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.276 17:04:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:55.276 [2024-11-20 17:04:19.002148] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:55.276 [2024-11-20 17:04:19.002354] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.276 [2024-11-20 17:04:19.006006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.276 [2024-11-20 17:04:19.006297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.276 [2024-11-20 17:04:19.006598] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.276 { 00:11:55.276 "results": [ 00:11:55.276 { 00:11:55.276 "job": "raid_bdev1", 00:11:55.276 "core_mask": "0x1", 00:11:55.276 "workload": "randrw", 00:11:55.276 "percentage": 50, 00:11:55.276 "status": "finished", 00:11:55.276 "queue_depth": 1, 00:11:55.276 "io_size": 131072, 00:11:55.276 "runtime": 1.434746, 00:11:55.276 "iops": 8228.63419727255, 00:11:55.276 "mibps": 1028.5792746590687, 00:11:55.276 "io_failed": 0, 00:11:55.276 "io_timeout": 0, 00:11:55.276 "avg_latency_us": 117.56503226402599, 00:11:55.276 "min_latency_us": 37.236363636363635, 00:11:55.276 "max_latency_us": 2115.0254545454545 00:11:55.276 } 00:11:55.276 ], 00:11:55.276 "core_count": 1 00:11:55.276 } 00:11:55.276 [2024-11-20 17:04:19.006738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:55.276 17:04:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.276 17:04:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74997 00:11:55.276 17:04:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74997 ']' 00:11:55.276 17:04:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74997 00:11:55.276 17:04:19 bdev_raid.raid_read_error_test --
common/autotest_common.sh@959 -- # uname 00:11:55.276 17:04:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.276 17:04:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74997 00:11:55.276 killing process with pid 74997 00:11:55.276 17:04:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.276 17:04:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.276 17:04:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74997' 00:11:55.276 17:04:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74997 00:11:55.276 [2024-11-20 17:04:19.036558] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:55.276 17:04:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74997 00:11:55.538 [2024-11-20 17:04:19.294985] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:56.475 17:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pnKGhrDK4N 00:11:56.475 17:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:56.475 17:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:56.475 17:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:56.475 17:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:56.475 ************************************ 00:11:56.475 END TEST raid_read_error_test 00:11:56.475 ************************************ 00:11:56.475 17:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:56.475 17:04:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:56.475 17:04:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:56.475 00:11:56.475 real 0m4.704s 00:11:56.475 user 0m5.795s 00:11:56.475 sys 0m0.588s 00:11:56.475 17:04:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.475 17:04:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.734 17:04:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:56.734 17:04:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:56.734 17:04:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.734 17:04:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:56.734 ************************************ 00:11:56.734 START TEST raid_write_error_test 00:11:56.735 ************************************ 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rf59wUEkyg 00:11:56.735 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75137 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75137 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75137 ']' 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.735 17:04:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.735 [2024-11-20 17:04:20.496118] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:11:56.735 [2024-11-20 17:04:20.496370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75137 ] 00:11:56.995 [2024-11-20 17:04:20.673931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.995 [2024-11-20 17:04:20.788784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.254 [2024-11-20 17:04:20.976261] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.254 [2024-11-20 17:04:20.976525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.823 BaseBdev1_malloc 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.823 true 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.823 [2024-11-20 17:04:21.525636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:57.823 [2024-11-20 17:04:21.525731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.823 [2024-11-20 17:04:21.525758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:57.823 [2024-11-20 17:04:21.525821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.823 [2024-11-20 17:04:21.528625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.823 [2024-11-20 17:04:21.528890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:57.823 BaseBdev1 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.823 BaseBdev2_malloc 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:57.823 17:04:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.823 true 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.823 [2024-11-20 17:04:21.579822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:57.823 [2024-11-20 17:04:21.580037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.823 [2024-11-20 17:04:21.580075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:57.823 [2024-11-20 17:04:21.580103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.823 [2024-11-20 17:04:21.583130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.823 [2024-11-20 17:04:21.583368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:57.823 BaseBdev2 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:57.823 BaseBdev3_malloc 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.823 true 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.823 [2024-11-20 17:04:21.650517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:57.823 [2024-11-20 17:04:21.650595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.823 [2024-11-20 17:04:21.650621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:57.823 [2024-11-20 17:04:21.650637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.823 [2024-11-20 17:04:21.653556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.823 [2024-11-20 17:04:21.653620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:57.823 BaseBdev3 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.823 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.083 BaseBdev4_malloc 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.083 true 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.083 [2024-11-20 17:04:21.711576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:58.083 [2024-11-20 17:04:21.711641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.083 [2024-11-20 17:04:21.711668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:58.083 [2024-11-20 17:04:21.711685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.083 [2024-11-20 17:04:21.714429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.083 [2024-11-20 17:04:21.714491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:58.083 BaseBdev4 
00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.083 [2024-11-20 17:04:21.719640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:58.083 [2024-11-20 17:04:21.722048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.083 [2024-11-20 17:04:21.722158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.083 [2024-11-20 17:04:21.722246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:58.083 [2024-11-20 17:04:21.722505] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:58.083 [2024-11-20 17:04:21.722529] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:58.083 [2024-11-20 17:04:21.722842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:58.083 [2024-11-20 17:04:21.723074] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:58.083 [2024-11-20 17:04:21.723094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:58.083 [2024-11-20 17:04:21.723367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.083 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.084 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.084 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.084 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.084 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.084 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.084 "name": "raid_bdev1", 00:11:58.084 "uuid": "ca5afdd5-6559-451d-81c6-5bd6593b5233", 00:11:58.084 "strip_size_kb": 0, 00:11:58.084 "state": "online", 00:11:58.084 "raid_level": "raid1", 00:11:58.084 "superblock": true, 00:11:58.084 "num_base_bdevs": 4, 00:11:58.084 "num_base_bdevs_discovered": 4, 00:11:58.084 
"num_base_bdevs_operational": 4, 00:11:58.084 "base_bdevs_list": [ 00:11:58.084 { 00:11:58.084 "name": "BaseBdev1", 00:11:58.084 "uuid": "1572862f-827e-5b1f-8090-a6024da725cf", 00:11:58.084 "is_configured": true, 00:11:58.084 "data_offset": 2048, 00:11:58.084 "data_size": 63488 00:11:58.084 }, 00:11:58.084 { 00:11:58.084 "name": "BaseBdev2", 00:11:58.084 "uuid": "41f0190c-6101-51c5-a0d0-7583b889f87b", 00:11:58.084 "is_configured": true, 00:11:58.084 "data_offset": 2048, 00:11:58.084 "data_size": 63488 00:11:58.084 }, 00:11:58.084 { 00:11:58.084 "name": "BaseBdev3", 00:11:58.084 "uuid": "f5a1d989-ff43-5d7b-a730-d4a97b74d1d7", 00:11:58.084 "is_configured": true, 00:11:58.084 "data_offset": 2048, 00:11:58.084 "data_size": 63488 00:11:58.084 }, 00:11:58.084 { 00:11:58.084 "name": "BaseBdev4", 00:11:58.084 "uuid": "89080b4b-2856-5336-9849-dbd1bdedff2a", 00:11:58.084 "is_configured": true, 00:11:58.084 "data_offset": 2048, 00:11:58.084 "data_size": 63488 00:11:58.084 } 00:11:58.084 ] 00:11:58.084 }' 00:11:58.084 17:04:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.084 17:04:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.652 17:04:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:58.652 17:04:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:58.652 [2024-11-20 17:04:22.337082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.590 [2024-11-20 17:04:23.217257] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:59.590 [2024-11-20 17:04:23.217330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:59.590 [2024-11-20 17:04:23.217595] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.590 "name": "raid_bdev1", 00:11:59.590 "uuid": "ca5afdd5-6559-451d-81c6-5bd6593b5233", 00:11:59.590 "strip_size_kb": 0, 00:11:59.590 "state": "online", 00:11:59.590 "raid_level": "raid1", 00:11:59.590 "superblock": true, 00:11:59.590 "num_base_bdevs": 4, 00:11:59.590 "num_base_bdevs_discovered": 3, 00:11:59.590 "num_base_bdevs_operational": 3, 00:11:59.590 "base_bdevs_list": [ 00:11:59.590 { 00:11:59.590 "name": null, 00:11:59.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.590 "is_configured": false, 00:11:59.590 "data_offset": 0, 00:11:59.590 "data_size": 63488 00:11:59.590 }, 00:11:59.590 { 00:11:59.590 "name": "BaseBdev2", 00:11:59.590 "uuid": "41f0190c-6101-51c5-a0d0-7583b889f87b", 00:11:59.590 "is_configured": true, 00:11:59.590 "data_offset": 2048, 00:11:59.590 "data_size": 63488 00:11:59.590 }, 00:11:59.590 { 00:11:59.590 "name": "BaseBdev3", 00:11:59.590 "uuid": "f5a1d989-ff43-5d7b-a730-d4a97b74d1d7", 00:11:59.590 "is_configured": true, 00:11:59.590 "data_offset": 2048, 00:11:59.590 "data_size": 63488 00:11:59.590 }, 00:11:59.590 { 00:11:59.590 "name": "BaseBdev4", 00:11:59.590 "uuid": "89080b4b-2856-5336-9849-dbd1bdedff2a", 00:11:59.590 "is_configured": true, 00:11:59.590 "data_offset": 2048, 00:11:59.590 "data_size": 63488 00:11:59.590 } 00:11:59.590 ] 
00:11:59.590 }' 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.590 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.158 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:00.158 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.158 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.158 [2024-11-20 17:04:23.768733] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:00.158 [2024-11-20 17:04:23.768952] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.158 [2024-11-20 17:04:23.772453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.158 { 00:12:00.158 "results": [ 00:12:00.158 { 00:12:00.158 "job": "raid_bdev1", 00:12:00.158 "core_mask": "0x1", 00:12:00.159 "workload": "randrw", 00:12:00.159 "percentage": 50, 00:12:00.159 "status": "finished", 00:12:00.159 "queue_depth": 1, 00:12:00.159 "io_size": 131072, 00:12:00.159 "runtime": 1.429727, 00:12:00.159 "iops": 9048.580603150112, 00:12:00.159 "mibps": 1131.072575393764, 00:12:00.159 "io_failed": 0, 00:12:00.159 "io_timeout": 0, 00:12:00.159 "avg_latency_us": 106.46492892127583, 00:12:00.159 "min_latency_us": 37.236363636363635, 00:12:00.159 "max_latency_us": 1817.1345454545456 00:12:00.159 } 00:12:00.159 ], 00:12:00.159 "core_count": 1 00:12:00.159 } 00:12:00.159 [2024-11-20 17:04:23.772669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.159 [2024-11-20 17:04:23.772905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.159 [2024-11-20 17:04:23.772928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, 
state offline 00:12:00.159 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.159 17:04:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75137 00:12:00.159 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75137 ']' 00:12:00.159 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75137 00:12:00.159 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:00.159 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.159 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75137 00:12:00.159 killing process with pid 75137 00:12:00.159 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.159 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.159 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75137' 00:12:00.159 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75137 00:12:00.159 [2024-11-20 17:04:23.810446] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:00.159 17:04:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75137 00:12:00.418 [2024-11-20 17:04:24.080721] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:01.354 17:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rf59wUEkyg 00:12:01.354 17:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:01.355 17:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:01.355 17:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:01.355 17:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:01.355 17:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:01.355 17:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:01.355 17:04:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:01.355 00:12:01.355 real 0m4.731s 00:12:01.355 user 0m5.853s 00:12:01.355 sys 0m0.582s 00:12:01.355 ************************************ 00:12:01.355 END TEST raid_write_error_test 00:12:01.355 ************************************ 00:12:01.355 17:04:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.355 17:04:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.355 17:04:25 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:01.355 17:04:25 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:01.355 17:04:25 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:01.355 17:04:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:01.355 17:04:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.355 17:04:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:01.355 ************************************ 00:12:01.355 START TEST raid_rebuild_test 00:12:01.355 ************************************ 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:01.355 
17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:01.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
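The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is printed by the harness's `waitforlisten` step before any RPC is issued. As an illustration only (not SPDK's actual implementation), readiness polling against a UNIX-domain socket can be sketched as below; `wait_for_unix_socket` is a hypothetical helper:

```python
import os
import socket
import time

def wait_for_unix_socket(path, timeout=5.0, interval=0.1):
    """Poll until a process is accepting connections on a UNIX-domain
    stream socket at `path`, or return False after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                # connect() succeeds only once the server has bound
                # the path and called listen().
                s.connect(path)
                return True
            except OSError:
                pass  # socket file exists but nobody is listening yet
            finally:
                s.close()
        time.sleep(interval)
    return False
```

The real harness additionally checks that the target PID is still alive while polling, so a crashed daemon fails fast instead of burning the whole timeout.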
00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75286 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75286 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75286 ']' 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.355 17:04:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.614 [2024-11-20 17:04:25.282684] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:12:01.614 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:01.614 Zero copy mechanism will not be used. 
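The notice above reports that the configured I/O size exceeds the 65536-byte zero-copy threshold, so bdevperf falls back to copying buffers: the `-o 3M` argument expands to 3145728 bytes with binary suffixes. A minimal sketch of that arithmetic, where `parse_size` is a hypothetical helper mirroring the K/M/G suffix convention:

```python
def parse_size(s):
    """Expand a size string with an optional binary suffix (K/M/G)."""
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    suffix = s[-1].upper()
    if suffix in units:
        return int(s[:-1]) * units[suffix]
    return int(s)

ZERO_COPY_THRESHOLD = 65536  # threshold value reported in the log above

io_size = parse_size("3M")          # the bdevperf -o argument
print(io_size)                      # 3145728
print(io_size > ZERO_COPY_THRESHOLD)  # True, so zero copy is disabled
```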
00:12:01.614 [2024-11-20 17:04:25.283142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75286 ] 00:12:01.614 [2024-11-20 17:04:25.465873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.873 [2024-11-20 17:04:25.579616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.132 [2024-11-20 17:04:25.779794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.132 [2024-11-20 17:04:25.779840] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.391 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.391 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:02.391 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:02.391 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:02.392 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.392 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.392 BaseBdev1_malloc 00:12:02.392 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.392 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:02.392 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.392 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.392 [2024-11-20 17:04:26.244287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:02.392 
[2024-11-20 17:04:26.244545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.392 [2024-11-20 17:04:26.244585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:02.392 [2024-11-20 17:04:26.244604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.392 [2024-11-20 17:04:26.247360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.392 [2024-11-20 17:04:26.247422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:02.392 BaseBdev1 00:12:02.392 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.392 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:02.392 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:02.392 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.392 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.651 BaseBdev2_malloc 00:12:02.651 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.651 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:02.651 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.651 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.651 [2024-11-20 17:04:26.297831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:02.651 [2024-11-20 17:04:26.298065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.651 [2024-11-20 17:04:26.298105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:02.651 [2024-11-20 17:04:26.298124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.651 [2024-11-20 17:04:26.300957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.651 [2024-11-20 17:04:26.301002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:02.651 BaseBdev2 00:12:02.651 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.651 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:02.651 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.651 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.651 spare_malloc 00:12:02.651 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.651 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.652 spare_delay 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.652 [2024-11-20 17:04:26.363316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:02.652 [2024-11-20 17:04:26.363400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:02.652 [2024-11-20 17:04:26.363441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:02.652 [2024-11-20 17:04:26.363458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.652 [2024-11-20 17:04:26.366469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.652 [2024-11-20 17:04:26.366529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:02.652 spare 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.652 [2024-11-20 17:04:26.371399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.652 [2024-11-20 17:04:26.373735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.652 [2024-11-20 17:04:26.373896] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:02.652 [2024-11-20 17:04:26.373917] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:02.652 [2024-11-20 17:04:26.374278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:02.652 [2024-11-20 17:04:26.374478] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:02.652 [2024-11-20 17:04:26.374495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:02.652 [2024-11-20 17:04:26.374692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.652 "name": "raid_bdev1", 00:12:02.652 "uuid": "656c20e7-5707-4ec2-b77f-8d30f1def6dc", 00:12:02.652 "strip_size_kb": 0, 00:12:02.652 "state": "online", 00:12:02.652 
"raid_level": "raid1", 00:12:02.652 "superblock": false, 00:12:02.652 "num_base_bdevs": 2, 00:12:02.652 "num_base_bdevs_discovered": 2, 00:12:02.652 "num_base_bdevs_operational": 2, 00:12:02.652 "base_bdevs_list": [ 00:12:02.652 { 00:12:02.652 "name": "BaseBdev1", 00:12:02.652 "uuid": "79516e3d-0f48-55dd-b750-d948c73525fe", 00:12:02.652 "is_configured": true, 00:12:02.652 "data_offset": 0, 00:12:02.652 "data_size": 65536 00:12:02.652 }, 00:12:02.652 { 00:12:02.652 "name": "BaseBdev2", 00:12:02.652 "uuid": "e807429c-0f9a-595a-8692-a1d8fb329df7", 00:12:02.652 "is_configured": true, 00:12:02.652 "data_offset": 0, 00:12:02.652 "data_size": 65536 00:12:02.652 } 00:12:02.652 ] 00:12:02.652 }' 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.652 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:03.220 [2024-11-20 17:04:26.903914] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:03.220 17:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:03.221 17:04:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:03.480 [2024-11-20 17:04:27.283742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:03.480 /dev/nbd0 00:12:03.480 17:04:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:03.480 17:04:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:12:03.480 17:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:03.480 17:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:03.480 17:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:03.480 17:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:03.480 17:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:03.480 17:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:03.480 17:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:03.480 17:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:03.480 17:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.480 1+0 records in 00:12:03.480 1+0 records out 00:12:03.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615614 s, 6.7 MB/s 00:12:03.480 17:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.480 17:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:03.480 17:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.738 17:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:03.738 17:04:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:03.738 17:04:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.738 17:04:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:03.738 17:04:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 
-- # '[' raid1 = raid5f ']' 00:12:03.738 17:04:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:03.738 17:04:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:10.305 65536+0 records in 00:12:10.305 65536+0 records out 00:12:10.305 33554432 bytes (34 MB, 32 MiB) copied, 6.14603 s, 5.5 MB/s 00:12:10.305 17:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:10.305 17:04:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:10.305 17:04:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:10.305 17:04:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:10.305 17:04:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:10.305 17:04:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.305 17:04:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:10.305 [2024-11-20 17:04:33.781973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.305 17:04:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:10.305 17:04:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:10.305 17:04:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:10.305 17:04:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.305 17:04:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.305 17:04:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:10.305 17:04:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # 
break 00:12:10.305 17:04:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.305 17:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:10.305 17:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.306 [2024-11-20 17:04:33.814055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.306 "name": "raid_bdev1", 00:12:10.306 "uuid": "656c20e7-5707-4ec2-b77f-8d30f1def6dc", 00:12:10.306 "strip_size_kb": 0, 00:12:10.306 "state": "online", 00:12:10.306 "raid_level": "raid1", 00:12:10.306 "superblock": false, 00:12:10.306 "num_base_bdevs": 2, 00:12:10.306 "num_base_bdevs_discovered": 1, 00:12:10.306 "num_base_bdevs_operational": 1, 00:12:10.306 "base_bdevs_list": [ 00:12:10.306 { 00:12:10.306 "name": null, 00:12:10.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.306 "is_configured": false, 00:12:10.306 "data_offset": 0, 00:12:10.306 "data_size": 65536 00:12:10.306 }, 00:12:10.306 { 00:12:10.306 "name": "BaseBdev2", 00:12:10.306 "uuid": "e807429c-0f9a-595a-8692-a1d8fb329df7", 00:12:10.306 "is_configured": true, 00:12:10.306 "data_offset": 0, 00:12:10.306 "data_size": 65536 00:12:10.306 } 00:12:10.306 ] 00:12:10.306 }' 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.306 17:04:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.565 17:04:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:10.565 17:04:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.565 17:04:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.565 [2024-11-20 17:04:34.314265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:10.565 [2024-11-20 17:04:34.330534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:10.565 17:04:34 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.565 17:04:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:10.565 [2024-11-20 17:04:34.333162] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:11.510 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:11.510 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.510 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:11.510 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:11.510 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.510 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.510 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.510 17:04:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.510 17:04:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.510 17:04:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.768 "name": "raid_bdev1", 00:12:11.768 "uuid": "656c20e7-5707-4ec2-b77f-8d30f1def6dc", 00:12:11.768 "strip_size_kb": 0, 00:12:11.768 "state": "online", 00:12:11.768 "raid_level": "raid1", 00:12:11.768 "superblock": false, 00:12:11.768 "num_base_bdevs": 2, 00:12:11.768 "num_base_bdevs_discovered": 2, 00:12:11.768 "num_base_bdevs_operational": 2, 00:12:11.768 "process": { 00:12:11.768 "type": "rebuild", 00:12:11.768 "target": "spare", 00:12:11.768 "progress": { 00:12:11.768 "blocks": 20480, 
00:12:11.768 "percent": 31 00:12:11.768 } 00:12:11.768 }, 00:12:11.768 "base_bdevs_list": [ 00:12:11.768 { 00:12:11.768 "name": "spare", 00:12:11.768 "uuid": "d7990b66-ed04-5f76-96b8-dbb33abce318", 00:12:11.768 "is_configured": true, 00:12:11.768 "data_offset": 0, 00:12:11.768 "data_size": 65536 00:12:11.768 }, 00:12:11.768 { 00:12:11.768 "name": "BaseBdev2", 00:12:11.768 "uuid": "e807429c-0f9a-595a-8692-a1d8fb329df7", 00:12:11.768 "is_configured": true, 00:12:11.768 "data_offset": 0, 00:12:11.768 "data_size": 65536 00:12:11.768 } 00:12:11.768 ] 00:12:11.768 }' 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.768 [2024-11-20 17:04:35.498038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:11.768 [2024-11-20 17:04:35.540889] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:11.768 [2024-11-20 17:04:35.540971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.768 [2024-11-20 17:04:35.540993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:11.768 [2024-11-20 17:04:35.541007] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:11.768 17:04:35 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.768 "name": "raid_bdev1", 00:12:11.768 "uuid": "656c20e7-5707-4ec2-b77f-8d30f1def6dc", 00:12:11.768 "strip_size_kb": 0, 00:12:11.768 "state": "online", 00:12:11.768 "raid_level": "raid1", 00:12:11.768 
"superblock": false, 00:12:11.768 "num_base_bdevs": 2, 00:12:11.768 "num_base_bdevs_discovered": 1, 00:12:11.768 "num_base_bdevs_operational": 1, 00:12:11.768 "base_bdevs_list": [ 00:12:11.768 { 00:12:11.768 "name": null, 00:12:11.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.768 "is_configured": false, 00:12:11.768 "data_offset": 0, 00:12:11.768 "data_size": 65536 00:12:11.768 }, 00:12:11.768 { 00:12:11.768 "name": "BaseBdev2", 00:12:11.768 "uuid": "e807429c-0f9a-595a-8692-a1d8fb329df7", 00:12:11.768 "is_configured": true, 00:12:11.768 "data_offset": 0, 00:12:11.768 "data_size": 65536 00:12:11.768 } 00:12:11.768 ] 00:12:11.768 }' 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.768 17:04:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.336 17:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:12.336 17:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.336 17:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:12.336 17:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:12.336 17:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.336 17:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.336 17:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.336 17:04:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.336 17:04:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.336 17:04:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.336 17:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:12.336 "name": "raid_bdev1", 00:12:12.336 "uuid": "656c20e7-5707-4ec2-b77f-8d30f1def6dc", 00:12:12.336 "strip_size_kb": 0, 00:12:12.336 "state": "online", 00:12:12.336 "raid_level": "raid1", 00:12:12.336 "superblock": false, 00:12:12.336 "num_base_bdevs": 2, 00:12:12.336 "num_base_bdevs_discovered": 1, 00:12:12.336 "num_base_bdevs_operational": 1, 00:12:12.336 "base_bdevs_list": [ 00:12:12.336 { 00:12:12.336 "name": null, 00:12:12.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.336 "is_configured": false, 00:12:12.336 "data_offset": 0, 00:12:12.336 "data_size": 65536 00:12:12.336 }, 00:12:12.336 { 00:12:12.336 "name": "BaseBdev2", 00:12:12.336 "uuid": "e807429c-0f9a-595a-8692-a1d8fb329df7", 00:12:12.336 "is_configured": true, 00:12:12.336 "data_offset": 0, 00:12:12.336 "data_size": 65536 00:12:12.336 } 00:12:12.336 ] 00:12:12.336 }' 00:12:12.336 17:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.595 17:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:12.595 17:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.595 17:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:12.595 17:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:12.595 17:04:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.595 17:04:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.595 [2024-11-20 17:04:36.266452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:12.595 [2024-11-20 17:04:36.281996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:12.595 17:04:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.595 
17:04:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:12.595 [2024-11-20 17:04:36.284695] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:13.532 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.532 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.532 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.532 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.532 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.532 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.532 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.532 17:04:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.532 17:04:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.532 17:04:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.532 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.532 "name": "raid_bdev1", 00:12:13.532 "uuid": "656c20e7-5707-4ec2-b77f-8d30f1def6dc", 00:12:13.532 "strip_size_kb": 0, 00:12:13.532 "state": "online", 00:12:13.532 "raid_level": "raid1", 00:12:13.532 "superblock": false, 00:12:13.532 "num_base_bdevs": 2, 00:12:13.532 "num_base_bdevs_discovered": 2, 00:12:13.532 "num_base_bdevs_operational": 2, 00:12:13.532 "process": { 00:12:13.532 "type": "rebuild", 00:12:13.532 "target": "spare", 00:12:13.532 "progress": { 00:12:13.532 "blocks": 20480, 00:12:13.532 "percent": 31 00:12:13.532 } 00:12:13.532 }, 00:12:13.532 "base_bdevs_list": [ 
00:12:13.532 { 00:12:13.532 "name": "spare", 00:12:13.532 "uuid": "d7990b66-ed04-5f76-96b8-dbb33abce318", 00:12:13.532 "is_configured": true, 00:12:13.532 "data_offset": 0, 00:12:13.532 "data_size": 65536 00:12:13.532 }, 00:12:13.532 { 00:12:13.532 "name": "BaseBdev2", 00:12:13.532 "uuid": "e807429c-0f9a-595a-8692-a1d8fb329df7", 00:12:13.532 "is_configured": true, 00:12:13.532 "data_offset": 0, 00:12:13.532 "data_size": 65536 00:12:13.532 } 00:12:13.532 ] 00:12:13.532 }' 00:12:13.532 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.532 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.532 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=393 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.792 
17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.792 "name": "raid_bdev1", 00:12:13.792 "uuid": "656c20e7-5707-4ec2-b77f-8d30f1def6dc", 00:12:13.792 "strip_size_kb": 0, 00:12:13.792 "state": "online", 00:12:13.792 "raid_level": "raid1", 00:12:13.792 "superblock": false, 00:12:13.792 "num_base_bdevs": 2, 00:12:13.792 "num_base_bdevs_discovered": 2, 00:12:13.792 "num_base_bdevs_operational": 2, 00:12:13.792 "process": { 00:12:13.792 "type": "rebuild", 00:12:13.792 "target": "spare", 00:12:13.792 "progress": { 00:12:13.792 "blocks": 22528, 00:12:13.792 "percent": 34 00:12:13.792 } 00:12:13.792 }, 00:12:13.792 "base_bdevs_list": [ 00:12:13.792 { 00:12:13.792 "name": "spare", 00:12:13.792 "uuid": "d7990b66-ed04-5f76-96b8-dbb33abce318", 00:12:13.792 "is_configured": true, 00:12:13.792 "data_offset": 0, 00:12:13.792 "data_size": 65536 00:12:13.792 }, 00:12:13.792 { 00:12:13.792 "name": "BaseBdev2", 00:12:13.792 "uuid": "e807429c-0f9a-595a-8692-a1d8fb329df7", 00:12:13.792 "is_configured": true, 00:12:13.792 "data_offset": 0, 00:12:13.792 "data_size": 65536 00:12:13.792 } 00:12:13.792 ] 00:12:13.792 }' 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.792 17:04:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:15.169 17:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:15.169 17:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.169 17:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.169 17:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.169 17:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.169 17:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.169 17:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.169 17:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.169 17:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.169 17:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.169 17:04:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.169 17:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.169 "name": "raid_bdev1", 00:12:15.169 "uuid": "656c20e7-5707-4ec2-b77f-8d30f1def6dc", 00:12:15.169 "strip_size_kb": 0, 00:12:15.169 "state": "online", 00:12:15.169 "raid_level": "raid1", 00:12:15.169 "superblock": false, 00:12:15.169 "num_base_bdevs": 2, 00:12:15.169 "num_base_bdevs_discovered": 2, 00:12:15.169 "num_base_bdevs_operational": 2, 00:12:15.169 "process": { 
00:12:15.169 "type": "rebuild", 00:12:15.169 "target": "spare", 00:12:15.169 "progress": { 00:12:15.169 "blocks": 47104, 00:12:15.169 "percent": 71 00:12:15.169 } 00:12:15.169 }, 00:12:15.169 "base_bdevs_list": [ 00:12:15.169 { 00:12:15.169 "name": "spare", 00:12:15.169 "uuid": "d7990b66-ed04-5f76-96b8-dbb33abce318", 00:12:15.169 "is_configured": true, 00:12:15.169 "data_offset": 0, 00:12:15.169 "data_size": 65536 00:12:15.169 }, 00:12:15.169 { 00:12:15.169 "name": "BaseBdev2", 00:12:15.169 "uuid": "e807429c-0f9a-595a-8692-a1d8fb329df7", 00:12:15.169 "is_configured": true, 00:12:15.169 "data_offset": 0, 00:12:15.169 "data_size": 65536 00:12:15.169 } 00:12:15.169 ] 00:12:15.169 }' 00:12:15.169 17:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.169 17:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:15.169 17:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.169 17:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.169 17:04:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:15.737 [2024-11-20 17:04:39.505710] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:15.737 [2024-11-20 17:04:39.505848] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:15.737 [2024-11-20 17:04:39.505941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.996 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:15.996 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.996 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.996 17:04:39 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.996 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.996 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.996 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.996 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.996 17:04:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.996 17:04:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.996 17:04:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.996 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.996 "name": "raid_bdev1", 00:12:15.996 "uuid": "656c20e7-5707-4ec2-b77f-8d30f1def6dc", 00:12:15.996 "strip_size_kb": 0, 00:12:15.996 "state": "online", 00:12:15.996 "raid_level": "raid1", 00:12:15.996 "superblock": false, 00:12:15.996 "num_base_bdevs": 2, 00:12:15.996 "num_base_bdevs_discovered": 2, 00:12:15.996 "num_base_bdevs_operational": 2, 00:12:15.996 "base_bdevs_list": [ 00:12:15.996 { 00:12:15.996 "name": "spare", 00:12:15.996 "uuid": "d7990b66-ed04-5f76-96b8-dbb33abce318", 00:12:15.996 "is_configured": true, 00:12:15.996 "data_offset": 0, 00:12:15.996 "data_size": 65536 00:12:15.996 }, 00:12:15.996 { 00:12:15.996 "name": "BaseBdev2", 00:12:15.996 "uuid": "e807429c-0f9a-595a-8692-a1d8fb329df7", 00:12:15.996 "is_configured": true, 00:12:15.996 "data_offset": 0, 00:12:15.996 "data_size": 65536 00:12:15.996 } 00:12:15.996 ] 00:12:15.996 }' 00:12:15.996 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.254 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:16.254 17:04:39 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.254 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:16.254 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:16.254 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:16.254 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.254 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:16.254 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:16.254 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.254 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.254 17:04:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.254 17:04:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.254 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.254 17:04:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.254 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.254 "name": "raid_bdev1", 00:12:16.254 "uuid": "656c20e7-5707-4ec2-b77f-8d30f1def6dc", 00:12:16.254 "strip_size_kb": 0, 00:12:16.254 "state": "online", 00:12:16.254 "raid_level": "raid1", 00:12:16.254 "superblock": false, 00:12:16.254 "num_base_bdevs": 2, 00:12:16.254 "num_base_bdevs_discovered": 2, 00:12:16.254 "num_base_bdevs_operational": 2, 00:12:16.254 "base_bdevs_list": [ 00:12:16.254 { 00:12:16.254 "name": "spare", 00:12:16.254 "uuid": "d7990b66-ed04-5f76-96b8-dbb33abce318", 00:12:16.254 "is_configured": true, 
00:12:16.254 "data_offset": 0, 00:12:16.254 "data_size": 65536 00:12:16.254 }, 00:12:16.254 { 00:12:16.254 "name": "BaseBdev2", 00:12:16.254 "uuid": "e807429c-0f9a-595a-8692-a1d8fb329df7", 00:12:16.254 "is_configured": true, 00:12:16.254 "data_offset": 0, 00:12:16.254 "data_size": 65536 00:12:16.254 } 00:12:16.254 ] 00:12:16.254 }' 00:12:16.254 17:04:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.254 17:04:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.513 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.513 "name": "raid_bdev1", 00:12:16.513 "uuid": "656c20e7-5707-4ec2-b77f-8d30f1def6dc", 00:12:16.513 "strip_size_kb": 0, 00:12:16.513 "state": "online", 00:12:16.513 "raid_level": "raid1", 00:12:16.513 "superblock": false, 00:12:16.513 "num_base_bdevs": 2, 00:12:16.513 "num_base_bdevs_discovered": 2, 00:12:16.513 "num_base_bdevs_operational": 2, 00:12:16.513 "base_bdevs_list": [ 00:12:16.513 { 00:12:16.513 "name": "spare", 00:12:16.513 "uuid": "d7990b66-ed04-5f76-96b8-dbb33abce318", 00:12:16.513 "is_configured": true, 00:12:16.513 "data_offset": 0, 00:12:16.513 "data_size": 65536 00:12:16.513 }, 00:12:16.513 { 00:12:16.513 "name": "BaseBdev2", 00:12:16.513 "uuid": "e807429c-0f9a-595a-8692-a1d8fb329df7", 00:12:16.513 "is_configured": true, 00:12:16.513 "data_offset": 0, 00:12:16.513 "data_size": 65536 00:12:16.513 } 00:12:16.513 ] 00:12:16.513 }' 00:12:16.513 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.513 17:04:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.771 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:16.771 17:04:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.771 17:04:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.029 [2024-11-20 17:04:40.640426] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:17.029 [2024-11-20 17:04:40.640627] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.029 [2024-11-20 17:04:40.640847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.029 [2024-11-20 17:04:40.641062] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.029 [2024-11-20 17:04:40.641185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:17.029 17:04:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:17.288 /dev/nbd0 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.288 1+0 records in 00:12:17.288 1+0 records out 00:12:17.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294909 s, 13.9 MB/s 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:17.288 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:17.547 /dev/nbd1 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.547 1+0 records in 00:12:17.547 1+0 records out 00:12:17.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393834 s, 10.4 MB/s 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:17.547 17:04:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:17.806 17:04:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:17.806 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:17.806 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:17.806 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:17.806 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:17.806 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:17.806 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:18.065 17:04:41 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:18.065 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:18.065 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:18.065 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.065 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.065 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:18.065 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:18.065 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.065 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.065 17:04:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:18.324 17:04:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:18.324 17:04:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:18.324 17:04:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:18.324 17:04:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.324 17:04:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.324 17:04:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:18.324 17:04:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:18.324 17:04:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.324 17:04:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:18.324 17:04:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75286 00:12:18.324 17:04:42 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75286 ']' 00:12:18.324 17:04:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75286 00:12:18.324 17:04:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:18.324 17:04:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.324 17:04:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75286 00:12:18.324 killing process with pid 75286 00:12:18.324 Received shutdown signal, test time was about 60.000000 seconds 00:12:18.324 00:12:18.325 Latency(us) 00:12:18.325 [2024-11-20T17:04:42.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.325 [2024-11-20T17:04:42.194Z] =================================================================================================================== 00:12:18.325 [2024-11-20T17:04:42.194Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:18.325 17:04:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.325 17:04:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.325 17:04:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75286' 00:12:18.325 17:04:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75286 00:12:18.325 [2024-11-20 17:04:42.139825] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.325 17:04:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75286 00:12:18.583 [2024-11-20 17:04:42.370142] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.522 17:04:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:19.522 00:12:19.522 real 0m18.173s 00:12:19.522 user 0m20.587s 00:12:19.522 sys 0m3.323s 00:12:19.522 17:04:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.522 ************************************ 00:12:19.522 END TEST raid_rebuild_test 00:12:19.522 ************************************ 00:12:19.522 17:04:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.522 17:04:43 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:19.522 17:04:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:19.522 17:04:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.522 17:04:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:19.780 ************************************ 00:12:19.780 START TEST raid_rebuild_test_sb 00:12:19.780 ************************************ 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75732 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75732 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75732 ']' 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.780 17:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.781 [2024-11-20 17:04:43.491037] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:12:19.781 [2024-11-20 17:04:43.491457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75732 ]
00:12:19.781 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:19.781 Zero copy mechanism will not be used. 00:12:20.039 [2024-11-20 17:04:43.661029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.039 [2024-11-20 17:04:43.786094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.299 [2024-11-20 17:04:43.978105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.299 [2024-11-20 17:04:43.978354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.867 BaseBdev1_malloc 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.867 [2024-11-20 17:04:44.478010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:20.867 [2024-11-20 17:04:44.478107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.867 [2024-11-20 17:04:44.478137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at:
0x0x616000007280 00:12:20.867 [2024-11-20 17:04:44.478154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.867 [2024-11-20 17:04:44.481031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.867 [2024-11-20 17:04:44.481090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:20.867 BaseBdev1 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.867 BaseBdev2_malloc 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.867 [2024-11-20 17:04:44.529568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:20.867 [2024-11-20 17:04:44.529639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.867 [2024-11-20 17:04:44.529671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:20.867 [2024-11-20 17:04:44.529690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.867 [2024-11-20 17:04:44.532434] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.867 [2024-11-20 17:04:44.532473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:20.867 BaseBdev2 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.867 spare_malloc 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.867 spare_delay 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.867 [2024-11-20 17:04:44.596106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:20.867 [2024-11-20 17:04:44.596198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.867 [2024-11-20 17:04:44.596224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 
00:12:20.867 [2024-11-20 17:04:44.596241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.867 [2024-11-20 17:04:44.598928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.867 [2024-11-20 17:04:44.599004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:20.867 spare 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.867 [2024-11-20 17:04:44.604165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.867 [2024-11-20 17:04:44.606391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.867 [2024-11-20 17:04:44.606608] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:20.867 [2024-11-20 17:04:44.606644] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:20.867 [2024-11-20 17:04:44.606960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:20.867 [2024-11-20 17:04:44.607193] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:20.867 [2024-11-20 17:04:44.607208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:20.867 [2024-11-20 17:04:44.607378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.867 
17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.867 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.868 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.868 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.868 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.868 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.868 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.868 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.868 "name": "raid_bdev1", 00:12:20.868 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:20.868 "strip_size_kb": 0, 00:12:20.868 "state": "online", 00:12:20.868 "raid_level": "raid1", 00:12:20.868 "superblock": true, 00:12:20.868 "num_base_bdevs": 
2, 00:12:20.868 "num_base_bdevs_discovered": 2, 00:12:20.868 "num_base_bdevs_operational": 2, 00:12:20.868 "base_bdevs_list": [ 00:12:20.868 { 00:12:20.868 "name": "BaseBdev1", 00:12:20.868 "uuid": "56b9c469-4a6f-5c22-b1af-8dff54a06416", 00:12:20.868 "is_configured": true, 00:12:20.868 "data_offset": 2048, 00:12:20.868 "data_size": 63488 00:12:20.868 }, 00:12:20.868 { 00:12:20.868 "name": "BaseBdev2", 00:12:20.868 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:20.868 "is_configured": true, 00:12:20.868 "data_offset": 2048, 00:12:20.868 "data_size": 63488 00:12:20.868 } 00:12:20.868 ] 00:12:20.868 }' 00:12:20.868 17:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.868 17:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.434 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:21.434 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:21.434 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.434 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.434 [2024-11-20 17:04:45.112657] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.434 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.434 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:21.434 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:21.434 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.434 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.434 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:12:21.434 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.434 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:21.434 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:21.434 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:21.434 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:21.434 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:21.434 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:21.435 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:21.435 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:21.435 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:21.435 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:21.435 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:21.435 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:21.435 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:21.435 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:21.692 [2024-11-20 17:04:45.436510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:21.692 /dev/nbd0 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:21.692 1+0 records in 00:12:21.692 1+0 records out 00:12:21.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255168 s, 16.1 MB/s 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:21.692 17:04:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:21.692 17:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:28.293 63488+0 records in 00:12:28.293 63488+0 records out 00:12:28.293 32505856 bytes (33 MB, 31 MiB) copied, 5.71487 s, 5.7 MB/s 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:28.293 [2024-11-20 17:04:51.476951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.293 [2024-11-20 17:04:51.505024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.293 17:04:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.293 "name": "raid_bdev1", 00:12:28.293 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:28.293 "strip_size_kb": 0, 00:12:28.293 "state": "online", 00:12:28.293 "raid_level": "raid1", 00:12:28.293 "superblock": true, 00:12:28.293 "num_base_bdevs": 2, 00:12:28.293 "num_base_bdevs_discovered": 1, 00:12:28.293 "num_base_bdevs_operational": 1, 00:12:28.293 "base_bdevs_list": [ 00:12:28.293 { 00:12:28.293 "name": null, 00:12:28.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.293 "is_configured": false, 00:12:28.293 "data_offset": 0, 00:12:28.293 "data_size": 63488 00:12:28.293 }, 00:12:28.293 { 00:12:28.293 "name": "BaseBdev2", 00:12:28.293 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:28.293 "is_configured": true, 00:12:28.293 "data_offset": 2048, 00:12:28.293 "data_size": 63488 00:12:28.293 } 00:12:28.293 ] 00:12:28.293 }' 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.293 17:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.293 17:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:28.293 17:04:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.293 17:04:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.293 [2024-11-20 17:04:52.041274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:12:28.293 [2024-11-20 17:04:52.057254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:28.293 17:04:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.293 17:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:28.293 [2024-11-20 17:04:52.059849] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:29.229 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.229 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.229 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.229 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.229 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.229 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.229 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.229 17:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.229 17:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.229 17:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.487 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.487 "name": "raid_bdev1", 00:12:29.487 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:29.487 "strip_size_kb": 0, 00:12:29.487 "state": "online", 00:12:29.487 "raid_level": "raid1", 00:12:29.487 "superblock": true, 00:12:29.487 "num_base_bdevs": 2, 00:12:29.487 
"num_base_bdevs_discovered": 2, 00:12:29.487 "num_base_bdevs_operational": 2, 00:12:29.487 "process": { 00:12:29.487 "type": "rebuild", 00:12:29.487 "target": "spare", 00:12:29.487 "progress": { 00:12:29.487 "blocks": 20480, 00:12:29.487 "percent": 32 00:12:29.487 } 00:12:29.487 }, 00:12:29.487 "base_bdevs_list": [ 00:12:29.487 { 00:12:29.487 "name": "spare", 00:12:29.487 "uuid": "00ec16f2-58de-5f64-bb18-7f19829dd860", 00:12:29.487 "is_configured": true, 00:12:29.487 "data_offset": 2048, 00:12:29.487 "data_size": 63488 00:12:29.487 }, 00:12:29.487 { 00:12:29.487 "name": "BaseBdev2", 00:12:29.487 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:29.487 "is_configured": true, 00:12:29.487 "data_offset": 2048, 00:12:29.487 "data_size": 63488 00:12:29.487 } 00:12:29.487 ] 00:12:29.487 }' 00:12:29.487 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.487 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.487 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.487 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.488 [2024-11-20 17:04:53.220887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.488 [2024-11-20 17:04:53.267891] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:29.488 [2024-11-20 17:04:53.267988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.488 [2024-11-20 17:04:53.268009] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.488 [2024-11-20 17:04:53.268025] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.488 17:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.746 17:04:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.746 "name": "raid_bdev1", 00:12:29.746 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:29.746 "strip_size_kb": 0, 00:12:29.746 "state": "online", 00:12:29.746 "raid_level": "raid1", 00:12:29.746 "superblock": true, 00:12:29.746 "num_base_bdevs": 2, 00:12:29.746 "num_base_bdevs_discovered": 1, 00:12:29.746 "num_base_bdevs_operational": 1, 00:12:29.746 "base_bdevs_list": [ 00:12:29.746 { 00:12:29.746 "name": null, 00:12:29.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.747 "is_configured": false, 00:12:29.747 "data_offset": 0, 00:12:29.747 "data_size": 63488 00:12:29.747 }, 00:12:29.747 { 00:12:29.747 "name": "BaseBdev2", 00:12:29.747 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:29.747 "is_configured": true, 00:12:29.747 "data_offset": 2048, 00:12:29.747 "data_size": 63488 00:12:29.747 } 00:12:29.747 ] 00:12:29.747 }' 00:12:29.747 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.747 17:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.005 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:30.005 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.005 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.005 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.005 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.005 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.005 17:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.005 17:04:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:30.005 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.006 17:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.264 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.264 "name": "raid_bdev1", 00:12:30.264 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:30.264 "strip_size_kb": 0, 00:12:30.264 "state": "online", 00:12:30.264 "raid_level": "raid1", 00:12:30.264 "superblock": true, 00:12:30.264 "num_base_bdevs": 2, 00:12:30.265 "num_base_bdevs_discovered": 1, 00:12:30.265 "num_base_bdevs_operational": 1, 00:12:30.265 "base_bdevs_list": [ 00:12:30.265 { 00:12:30.265 "name": null, 00:12:30.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.265 "is_configured": false, 00:12:30.265 "data_offset": 0, 00:12:30.265 "data_size": 63488 00:12:30.265 }, 00:12:30.265 { 00:12:30.265 "name": "BaseBdev2", 00:12:30.265 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:30.265 "is_configured": true, 00:12:30.265 "data_offset": 2048, 00:12:30.265 "data_size": 63488 00:12:30.265 } 00:12:30.265 ] 00:12:30.265 }' 00:12:30.265 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.265 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:30.265 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.265 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.265 17:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:30.265 17:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.265 17:04:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:30.265 [2024-11-20 17:04:53.986845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:30.265 [2024-11-20 17:04:54.002225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:30.265 17:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.265 17:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:30.265 [2024-11-20 17:04:54.004802] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:31.200 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.200 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.200 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.200 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.200 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.200 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.200 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.200 17:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.200 17:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.200 17:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.200 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.201 "name": "raid_bdev1", 00:12:31.201 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:31.201 "strip_size_kb": 0, 00:12:31.201 "state": "online", 
00:12:31.201 "raid_level": "raid1", 00:12:31.201 "superblock": true, 00:12:31.201 "num_base_bdevs": 2, 00:12:31.201 "num_base_bdevs_discovered": 2, 00:12:31.201 "num_base_bdevs_operational": 2, 00:12:31.201 "process": { 00:12:31.201 "type": "rebuild", 00:12:31.201 "target": "spare", 00:12:31.201 "progress": { 00:12:31.201 "blocks": 20480, 00:12:31.201 "percent": 32 00:12:31.201 } 00:12:31.201 }, 00:12:31.201 "base_bdevs_list": [ 00:12:31.201 { 00:12:31.201 "name": "spare", 00:12:31.201 "uuid": "00ec16f2-58de-5f64-bb18-7f19829dd860", 00:12:31.201 "is_configured": true, 00:12:31.201 "data_offset": 2048, 00:12:31.201 "data_size": 63488 00:12:31.201 }, 00:12:31.201 { 00:12:31.201 "name": "BaseBdev2", 00:12:31.201 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:31.201 "is_configured": true, 00:12:31.201 "data_offset": 2048, 00:12:31.201 "data_size": 63488 00:12:31.201 } 00:12:31.201 ] 00:12:31.201 }' 00:12:31.201 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:31.460 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 
']' 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=411 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.460 "name": "raid_bdev1", 00:12:31.460 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:31.460 "strip_size_kb": 0, 00:12:31.460 "state": "online", 00:12:31.460 "raid_level": "raid1", 00:12:31.460 "superblock": true, 00:12:31.460 "num_base_bdevs": 2, 00:12:31.460 "num_base_bdevs_discovered": 2, 00:12:31.460 "num_base_bdevs_operational": 2, 00:12:31.460 "process": { 00:12:31.460 "type": "rebuild", 00:12:31.460 "target": "spare", 00:12:31.460 "progress": { 00:12:31.460 "blocks": 22528, 00:12:31.460 "percent": 35 00:12:31.460 } 00:12:31.460 }, 00:12:31.460 
"base_bdevs_list": [ 00:12:31.460 { 00:12:31.460 "name": "spare", 00:12:31.460 "uuid": "00ec16f2-58de-5f64-bb18-7f19829dd860", 00:12:31.460 "is_configured": true, 00:12:31.460 "data_offset": 2048, 00:12:31.460 "data_size": 63488 00:12:31.460 }, 00:12:31.460 { 00:12:31.460 "name": "BaseBdev2", 00:12:31.460 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:31.460 "is_configured": true, 00:12:31.460 "data_offset": 2048, 00:12:31.460 "data_size": 63488 00:12:31.460 } 00:12:31.460 ] 00:12:31.460 }' 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.460 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.718 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.718 17:04:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:32.667 17:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:32.667 17:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:32.667 17:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.667 17:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:32.667 17:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:32.667 17:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.667 17:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.667 17:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.667 17:04:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.668 17:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.668 17:04:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.668 17:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.668 "name": "raid_bdev1", 00:12:32.668 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:32.668 "strip_size_kb": 0, 00:12:32.668 "state": "online", 00:12:32.668 "raid_level": "raid1", 00:12:32.668 "superblock": true, 00:12:32.668 "num_base_bdevs": 2, 00:12:32.668 "num_base_bdevs_discovered": 2, 00:12:32.668 "num_base_bdevs_operational": 2, 00:12:32.668 "process": { 00:12:32.668 "type": "rebuild", 00:12:32.668 "target": "spare", 00:12:32.668 "progress": { 00:12:32.668 "blocks": 47104, 00:12:32.668 "percent": 74 00:12:32.668 } 00:12:32.668 }, 00:12:32.668 "base_bdevs_list": [ 00:12:32.668 { 00:12:32.668 "name": "spare", 00:12:32.668 "uuid": "00ec16f2-58de-5f64-bb18-7f19829dd860", 00:12:32.668 "is_configured": true, 00:12:32.668 "data_offset": 2048, 00:12:32.668 "data_size": 63488 00:12:32.668 }, 00:12:32.668 { 00:12:32.668 "name": "BaseBdev2", 00:12:32.668 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:32.668 "is_configured": true, 00:12:32.668 "data_offset": 2048, 00:12:32.668 "data_size": 63488 00:12:32.668 } 00:12:32.668 ] 00:12:32.668 }' 00:12:32.668 17:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.668 17:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:32.668 17:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.668 17:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:32.668 17:04:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:12:33.612 [2024-11-20 17:04:57.124862] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:33.612 [2024-11-20 17:04:57.124974] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:33.612 [2024-11-20 17:04:57.125137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.871 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:33.871 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.871 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.871 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.871 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.871 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.871 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.871 17:04:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.871 17:04:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.871 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.871 17:04:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.871 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.871 "name": "raid_bdev1", 00:12:33.871 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:33.871 "strip_size_kb": 0, 00:12:33.871 "state": "online", 00:12:33.871 "raid_level": "raid1", 00:12:33.871 "superblock": true, 00:12:33.871 "num_base_bdevs": 2, 00:12:33.871 
"num_base_bdevs_discovered": 2, 00:12:33.871 "num_base_bdevs_operational": 2, 00:12:33.871 "base_bdevs_list": [ 00:12:33.871 { 00:12:33.871 "name": "spare", 00:12:33.872 "uuid": "00ec16f2-58de-5f64-bb18-7f19829dd860", 00:12:33.872 "is_configured": true, 00:12:33.872 "data_offset": 2048, 00:12:33.872 "data_size": 63488 00:12:33.872 }, 00:12:33.872 { 00:12:33.872 "name": "BaseBdev2", 00:12:33.872 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:33.872 "is_configured": true, 00:12:33.872 "data_offset": 2048, 00:12:33.872 "data_size": 63488 00:12:33.872 } 00:12:33.872 ] 00:12:33.872 }' 00:12:33.872 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.872 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:33.872 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.872 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:33.872 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:33.872 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:33.872 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.872 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:33.872 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:33.872 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.872 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.872 17:04:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.872 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:12:33.872 17:04:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.872 17:04:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.872 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.872 "name": "raid_bdev1", 00:12:33.872 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:33.872 "strip_size_kb": 0, 00:12:33.872 "state": "online", 00:12:33.872 "raid_level": "raid1", 00:12:33.872 "superblock": true, 00:12:33.872 "num_base_bdevs": 2, 00:12:33.872 "num_base_bdevs_discovered": 2, 00:12:33.872 "num_base_bdevs_operational": 2, 00:12:33.872 "base_bdevs_list": [ 00:12:33.872 { 00:12:33.872 "name": "spare", 00:12:33.872 "uuid": "00ec16f2-58de-5f64-bb18-7f19829dd860", 00:12:33.872 "is_configured": true, 00:12:33.872 "data_offset": 2048, 00:12:33.872 "data_size": 63488 00:12:33.872 }, 00:12:33.872 { 00:12:33.872 "name": "BaseBdev2", 00:12:33.872 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:33.872 "is_configured": true, 00:12:33.872 "data_offset": 2048, 00:12:33.872 "data_size": 63488 00:12:33.872 } 00:12:33.872 ] 00:12:33.872 }' 00:12:33.872 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.130 "name": "raid_bdev1", 00:12:34.130 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:34.130 "strip_size_kb": 0, 00:12:34.130 "state": "online", 00:12:34.130 "raid_level": "raid1", 00:12:34.130 "superblock": true, 00:12:34.130 "num_base_bdevs": 2, 00:12:34.130 "num_base_bdevs_discovered": 2, 00:12:34.130 "num_base_bdevs_operational": 2, 00:12:34.130 "base_bdevs_list": [ 00:12:34.130 { 00:12:34.130 "name": "spare", 00:12:34.130 "uuid": "00ec16f2-58de-5f64-bb18-7f19829dd860", 00:12:34.130 "is_configured": true, 00:12:34.130 "data_offset": 2048, 00:12:34.130 
"data_size": 63488 00:12:34.130 }, 00:12:34.130 { 00:12:34.130 "name": "BaseBdev2", 00:12:34.130 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:34.130 "is_configured": true, 00:12:34.130 "data_offset": 2048, 00:12:34.130 "data_size": 63488 00:12:34.130 } 00:12:34.130 ] 00:12:34.130 }' 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.130 17:04:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.698 [2024-11-20 17:04:58.352234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:34.698 [2024-11-20 17:04:58.352279] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.698 [2024-11-20 17:04:58.352369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.698 [2024-11-20 17:04:58.352456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.698 [2024-11-20 17:04:58.352472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:34.698 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:34.956 /dev/nbd0 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 
-- # local i 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.957 1+0 records in 00:12:34.957 1+0 records out 00:12:34.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214033 s, 19.1 MB/s 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:34.957 17:04:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:35.215 /dev/nbd1 00:12:35.215 17:04:59 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:35.215 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:35.215 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:35.215 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:35.215 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:35.215 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:35.215 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:35.215 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:35.215 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:35.215 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:35.215 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.215 1+0 records in 00:12:35.215 1+0 records out 00:12:35.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355038 s, 11.5 MB/s 00:12:35.215 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.215 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:35.215 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.216 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:35.216 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:35.216 17:04:59 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.216 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:35.216 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:35.474 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:35.474 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:35.474 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:35.474 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:35.474 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:35.474 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.474 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:35.737 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:35.737 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:35.737 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:35.737 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.737 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.737 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:35.737 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:35.737 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.737 17:04:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.737 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.997 [2024-11-20 17:04:59.793810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:12:35.997 [2024-11-20 17:04:59.793870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.997 [2024-11-20 17:04:59.793906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:35.997 [2024-11-20 17:04:59.793922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.997 [2024-11-20 17:04:59.796800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.997 [2024-11-20 17:04:59.796847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:35.997 [2024-11-20 17:04:59.796956] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:35.997 [2024-11-20 17:04:59.797020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:35.997 [2024-11-20 17:04:59.797190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.997 spare 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.997 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.256 [2024-11-20 17:04:59.897302] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:36.256 [2024-11-20 17:04:59.897336] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:36.256 [2024-11-20 17:04:59.897629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:36.256 [2024-11-20 17:04:59.897869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:36.256 [2024-11-20 17:04:59.897906] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:36.256 [2024-11-20 17:04:59.898118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.256 
17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.256 "name": "raid_bdev1", 00:12:36.256 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:36.256 "strip_size_kb": 0, 00:12:36.256 "state": "online", 00:12:36.256 "raid_level": "raid1", 00:12:36.256 "superblock": true, 00:12:36.256 "num_base_bdevs": 2, 00:12:36.256 "num_base_bdevs_discovered": 2, 00:12:36.256 "num_base_bdevs_operational": 2, 00:12:36.256 "base_bdevs_list": [ 00:12:36.256 { 00:12:36.256 "name": "spare", 00:12:36.256 "uuid": "00ec16f2-58de-5f64-bb18-7f19829dd860", 00:12:36.256 "is_configured": true, 00:12:36.256 "data_offset": 2048, 00:12:36.256 "data_size": 63488 00:12:36.256 }, 00:12:36.256 { 00:12:36.256 "name": "BaseBdev2", 00:12:36.256 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:36.256 "is_configured": true, 00:12:36.256 "data_offset": 2048, 00:12:36.256 "data_size": 63488 00:12:36.256 } 00:12:36.256 ] 00:12:36.256 }' 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.256 17:04:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.822 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:36.822 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.822 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:36.822 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:36.822 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.822 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.822 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.822 17:05:00 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.822 17:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.822 17:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.822 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.822 "name": "raid_bdev1", 00:12:36.822 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:36.822 "strip_size_kb": 0, 00:12:36.822 "state": "online", 00:12:36.822 "raid_level": "raid1", 00:12:36.822 "superblock": true, 00:12:36.822 "num_base_bdevs": 2, 00:12:36.822 "num_base_bdevs_discovered": 2, 00:12:36.822 "num_base_bdevs_operational": 2, 00:12:36.822 "base_bdevs_list": [ 00:12:36.822 { 00:12:36.822 "name": "spare", 00:12:36.823 "uuid": "00ec16f2-58de-5f64-bb18-7f19829dd860", 00:12:36.823 "is_configured": true, 00:12:36.823 "data_offset": 2048, 00:12:36.823 "data_size": 63488 00:12:36.823 }, 00:12:36.823 { 00:12:36.823 "name": "BaseBdev2", 00:12:36.823 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:36.823 "is_configured": true, 00:12:36.823 "data_offset": 2048, 00:12:36.823 "data_size": 63488 00:12:36.823 } 00:12:36.823 ] 00:12:36.823 }' 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.823 [2024-11-20 17:05:00.602325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.823 "name": "raid_bdev1", 00:12:36.823 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:36.823 "strip_size_kb": 0, 00:12:36.823 "state": "online", 00:12:36.823 "raid_level": "raid1", 00:12:36.823 "superblock": true, 00:12:36.823 "num_base_bdevs": 2, 00:12:36.823 "num_base_bdevs_discovered": 1, 00:12:36.823 "num_base_bdevs_operational": 1, 00:12:36.823 "base_bdevs_list": [ 00:12:36.823 { 00:12:36.823 "name": null, 00:12:36.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.823 "is_configured": false, 00:12:36.823 "data_offset": 0, 00:12:36.823 "data_size": 63488 00:12:36.823 }, 00:12:36.823 { 00:12:36.823 "name": "BaseBdev2", 00:12:36.823 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:36.823 "is_configured": true, 00:12:36.823 "data_offset": 2048, 00:12:36.823 "data_size": 63488 00:12:36.823 } 00:12:36.823 ] 00:12:36.823 }' 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.823 17:05:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.390 17:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:37.390 17:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.390 17:05:01 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.390 [2024-11-20 17:05:01.114469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:37.390 [2024-11-20 17:05:01.114827] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:37.390 [2024-11-20 17:05:01.114862] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:37.390 [2024-11-20 17:05:01.114911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:37.390 [2024-11-20 17:05:01.131381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:37.390 17:05:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.390 17:05:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:37.390 [2024-11-20 17:05:01.134057] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:38.325 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.325 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.325 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.325 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.325 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.325 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.325 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.325 17:05:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:38.325 17:05:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.325 17:05:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.583 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.583 "name": "raid_bdev1", 00:12:38.583 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:38.583 "strip_size_kb": 0, 00:12:38.583 "state": "online", 00:12:38.583 "raid_level": "raid1", 00:12:38.583 "superblock": true, 00:12:38.583 "num_base_bdevs": 2, 00:12:38.583 "num_base_bdevs_discovered": 2, 00:12:38.583 "num_base_bdevs_operational": 2, 00:12:38.583 "process": { 00:12:38.583 "type": "rebuild", 00:12:38.583 "target": "spare", 00:12:38.583 "progress": { 00:12:38.583 "blocks": 20480, 00:12:38.583 "percent": 32 00:12:38.583 } 00:12:38.583 }, 00:12:38.583 "base_bdevs_list": [ 00:12:38.584 { 00:12:38.584 "name": "spare", 00:12:38.584 "uuid": "00ec16f2-58de-5f64-bb18-7f19829dd860", 00:12:38.584 "is_configured": true, 00:12:38.584 "data_offset": 2048, 00:12:38.584 "data_size": 63488 00:12:38.584 }, 00:12:38.584 { 00:12:38.584 "name": "BaseBdev2", 00:12:38.584 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:38.584 "is_configured": true, 00:12:38.584 "data_offset": 2048, 00:12:38.584 "data_size": 63488 00:12:38.584 } 00:12:38.584 ] 00:12:38.584 }' 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:38.584 17:05:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.584 [2024-11-20 17:05:02.303250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:38.584 [2024-11-20 17:05:02.341912] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:38.584 [2024-11-20 17:05:02.341996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.584 [2024-11-20 17:05:02.342017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:38.584 [2024-11-20 17:05:02.342035] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.584 "name": "raid_bdev1", 00:12:38.584 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:38.584 "strip_size_kb": 0, 00:12:38.584 "state": "online", 00:12:38.584 "raid_level": "raid1", 00:12:38.584 "superblock": true, 00:12:38.584 "num_base_bdevs": 2, 00:12:38.584 "num_base_bdevs_discovered": 1, 00:12:38.584 "num_base_bdevs_operational": 1, 00:12:38.584 "base_bdevs_list": [ 00:12:38.584 { 00:12:38.584 "name": null, 00:12:38.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.584 "is_configured": false, 00:12:38.584 "data_offset": 0, 00:12:38.584 "data_size": 63488 00:12:38.584 }, 00:12:38.584 { 00:12:38.584 "name": "BaseBdev2", 00:12:38.584 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:38.584 "is_configured": true, 00:12:38.584 "data_offset": 2048, 00:12:38.584 "data_size": 63488 00:12:38.584 } 00:12:38.584 ] 00:12:38.584 }' 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.584 17:05:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.150 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:39.150 17:05:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:39.150 17:05:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.151 [2024-11-20 17:05:02.900097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:39.151 [2024-11-20 17:05:02.900374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.151 [2024-11-20 17:05:02.900412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:39.151 [2024-11-20 17:05:02.900430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.151 [2024-11-20 17:05:02.901044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.151 [2024-11-20 17:05:02.901076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:39.151 [2024-11-20 17:05:02.901240] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:39.151 [2024-11-20 17:05:02.901262] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:39.151 [2024-11-20 17:05:02.901274] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:39.151 [2024-11-20 17:05:02.901308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:39.151 [2024-11-20 17:05:02.915994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:39.151 spare 00:12:39.151 17:05:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.151 17:05:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:39.151 [2024-11-20 17:05:02.918665] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:40.086 17:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.086 17:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.086 17:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.086 17:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.086 17:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.086 17:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.086 17:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.086 17:05:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.086 17:05:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.086 17:05:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.345 17:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.345 "name": "raid_bdev1", 00:12:40.345 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:40.345 "strip_size_kb": 0, 00:12:40.345 "state": "online", 00:12:40.345 
"raid_level": "raid1", 00:12:40.345 "superblock": true, 00:12:40.345 "num_base_bdevs": 2, 00:12:40.345 "num_base_bdevs_discovered": 2, 00:12:40.345 "num_base_bdevs_operational": 2, 00:12:40.345 "process": { 00:12:40.345 "type": "rebuild", 00:12:40.345 "target": "spare", 00:12:40.345 "progress": { 00:12:40.345 "blocks": 20480, 00:12:40.345 "percent": 32 00:12:40.345 } 00:12:40.345 }, 00:12:40.345 "base_bdevs_list": [ 00:12:40.345 { 00:12:40.345 "name": "spare", 00:12:40.345 "uuid": "00ec16f2-58de-5f64-bb18-7f19829dd860", 00:12:40.345 "is_configured": true, 00:12:40.345 "data_offset": 2048, 00:12:40.345 "data_size": 63488 00:12:40.345 }, 00:12:40.345 { 00:12:40.345 "name": "BaseBdev2", 00:12:40.345 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:40.345 "is_configured": true, 00:12:40.345 "data_offset": 2048, 00:12:40.345 "data_size": 63488 00:12:40.345 } 00:12:40.345 ] 00:12:40.345 }' 00:12:40.345 17:05:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.345 [2024-11-20 17:05:04.087890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.345 [2024-11-20 17:05:04.126839] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:40.345 [2024-11-20 17:05:04.126937] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.345 [2024-11-20 17:05:04.126996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.345 [2024-11-20 17:05:04.127007] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.345 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.345 17:05:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.604 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.604 "name": "raid_bdev1", 00:12:40.604 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:40.604 "strip_size_kb": 0, 00:12:40.604 "state": "online", 00:12:40.604 "raid_level": "raid1", 00:12:40.604 "superblock": true, 00:12:40.604 "num_base_bdevs": 2, 00:12:40.604 "num_base_bdevs_discovered": 1, 00:12:40.604 "num_base_bdevs_operational": 1, 00:12:40.604 "base_bdevs_list": [ 00:12:40.604 { 00:12:40.604 "name": null, 00:12:40.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.604 "is_configured": false, 00:12:40.604 "data_offset": 0, 00:12:40.604 "data_size": 63488 00:12:40.604 }, 00:12:40.604 { 00:12:40.604 "name": "BaseBdev2", 00:12:40.604 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:40.604 "is_configured": true, 00:12:40.604 "data_offset": 2048, 00:12:40.604 "data_size": 63488 00:12:40.604 } 00:12:40.604 ] 00:12:40.604 }' 00:12:40.604 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.604 17:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.868 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:40.868 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.868 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:40.868 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:40.868 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.868 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.868 17:05:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.868 17:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.868 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.868 17:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.868 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.868 "name": "raid_bdev1", 00:12:40.868 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:40.868 "strip_size_kb": 0, 00:12:40.868 "state": "online", 00:12:40.868 "raid_level": "raid1", 00:12:40.868 "superblock": true, 00:12:40.868 "num_base_bdevs": 2, 00:12:40.868 "num_base_bdevs_discovered": 1, 00:12:40.868 "num_base_bdevs_operational": 1, 00:12:40.868 "base_bdevs_list": [ 00:12:40.868 { 00:12:40.868 "name": null, 00:12:40.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.868 "is_configured": false, 00:12:40.868 "data_offset": 0, 00:12:40.868 "data_size": 63488 00:12:40.868 }, 00:12:40.868 { 00:12:40.868 "name": "BaseBdev2", 00:12:40.868 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:40.868 "is_configured": true, 00:12:40.868 "data_offset": 2048, 00:12:40.868 "data_size": 63488 00:12:40.868 } 00:12:40.868 ] 00:12:40.868 }' 00:12:41.134 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.134 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:41.134 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.134 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:41.134 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:41.134 17:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:41.134 17:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.134 17:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.134 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:41.134 17:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.134 17:05:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.134 [2024-11-20 17:05:04.862931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:41.134 [2024-11-20 17:05:04.863013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.134 [2024-11-20 17:05:04.863064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:41.134 [2024-11-20 17:05:04.863092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.134 [2024-11-20 17:05:04.863710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.134 [2024-11-20 17:05:04.863752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:41.134 [2024-11-20 17:05:04.863867] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:41.134 [2024-11-20 17:05:04.863888] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:41.134 [2024-11-20 17:05:04.863902] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:41.134 [2024-11-20 17:05:04.863914] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:41.134 BaseBdev1 00:12:41.134 17:05:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.134 17:05:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:42.069 17:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:42.069 17:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.069 17:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.069 17:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.069 17:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.069 17:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:42.069 17:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.069 17:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.069 17:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.069 17:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.069 17:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.069 17:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.069 17:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.069 17:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.069 17:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.070 17:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.070 "name": "raid_bdev1", 00:12:42.070 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:42.070 
"strip_size_kb": 0, 00:12:42.070 "state": "online", 00:12:42.070 "raid_level": "raid1", 00:12:42.070 "superblock": true, 00:12:42.070 "num_base_bdevs": 2, 00:12:42.070 "num_base_bdevs_discovered": 1, 00:12:42.070 "num_base_bdevs_operational": 1, 00:12:42.070 "base_bdevs_list": [ 00:12:42.070 { 00:12:42.070 "name": null, 00:12:42.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.070 "is_configured": false, 00:12:42.070 "data_offset": 0, 00:12:42.070 "data_size": 63488 00:12:42.070 }, 00:12:42.070 { 00:12:42.070 "name": "BaseBdev2", 00:12:42.070 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:42.070 "is_configured": true, 00:12:42.070 "data_offset": 2048, 00:12:42.070 "data_size": 63488 00:12:42.070 } 00:12:42.070 ] 00:12:42.070 }' 00:12:42.070 17:05:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.070 17:05:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.637 17:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:42.637 17:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.637 17:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:42.637 17:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:42.637 17:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.637 17:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.637 17:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.637 17:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.637 17:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.637 17:05:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.637 17:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.637 "name": "raid_bdev1", 00:12:42.637 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:42.637 "strip_size_kb": 0, 00:12:42.637 "state": "online", 00:12:42.637 "raid_level": "raid1", 00:12:42.637 "superblock": true, 00:12:42.637 "num_base_bdevs": 2, 00:12:42.637 "num_base_bdevs_discovered": 1, 00:12:42.637 "num_base_bdevs_operational": 1, 00:12:42.637 "base_bdevs_list": [ 00:12:42.637 { 00:12:42.637 "name": null, 00:12:42.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.637 "is_configured": false, 00:12:42.637 "data_offset": 0, 00:12:42.637 "data_size": 63488 00:12:42.637 }, 00:12:42.637 { 00:12:42.637 "name": "BaseBdev2", 00:12:42.637 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:42.637 "is_configured": true, 00:12:42.637 "data_offset": 2048, 00:12:42.637 "data_size": 63488 00:12:42.637 } 00:12:42.637 ] 00:12:42.637 }' 00:12:42.637 17:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.637 17:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:42.637 17:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.895 17:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:42.895 17:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:42.895 17:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:42.895 17:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:42.895 17:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:12:42.895 17:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:42.895 17:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:42.895 17:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:42.895 17:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:42.895 17:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.895 17:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.895 [2024-11-20 17:05:06.559471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.895 [2024-11-20 17:05:06.559685] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:42.895 [2024-11-20 17:05:06.559714] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:42.895 request: 00:12:42.895 { 00:12:42.895 "base_bdev": "BaseBdev1", 00:12:42.895 "raid_bdev": "raid_bdev1", 00:12:42.895 "method": "bdev_raid_add_base_bdev", 00:12:42.895 "req_id": 1 00:12:42.895 } 00:12:42.895 Got JSON-RPC error response 00:12:42.895 response: 00:12:42.895 { 00:12:42.895 "code": -22, 00:12:42.895 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:42.895 } 00:12:42.895 17:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:42.895 17:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:42.895 17:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:42.895 17:05:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:42.895 17:05:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:42.895 17:05:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.831 "name": "raid_bdev1", 00:12:43.831 "uuid": 
"da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:43.831 "strip_size_kb": 0, 00:12:43.831 "state": "online", 00:12:43.831 "raid_level": "raid1", 00:12:43.831 "superblock": true, 00:12:43.831 "num_base_bdevs": 2, 00:12:43.831 "num_base_bdevs_discovered": 1, 00:12:43.831 "num_base_bdevs_operational": 1, 00:12:43.831 "base_bdevs_list": [ 00:12:43.831 { 00:12:43.831 "name": null, 00:12:43.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.831 "is_configured": false, 00:12:43.831 "data_offset": 0, 00:12:43.831 "data_size": 63488 00:12:43.831 }, 00:12:43.831 { 00:12:43.831 "name": "BaseBdev2", 00:12:43.831 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:43.831 "is_configured": true, 00:12:43.831 "data_offset": 2048, 00:12:43.831 "data_size": 63488 00:12:43.831 } 00:12:43.831 ] 00:12:43.831 }' 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.831 17:05:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.398 "name": "raid_bdev1", 00:12:44.398 "uuid": "da4d640a-f988-46e9-8acd-c1aec5fcc2d9", 00:12:44.398 "strip_size_kb": 0, 00:12:44.398 "state": "online", 00:12:44.398 "raid_level": "raid1", 00:12:44.398 "superblock": true, 00:12:44.398 "num_base_bdevs": 2, 00:12:44.398 "num_base_bdevs_discovered": 1, 00:12:44.398 "num_base_bdevs_operational": 1, 00:12:44.398 "base_bdevs_list": [ 00:12:44.398 { 00:12:44.398 "name": null, 00:12:44.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.398 "is_configured": false, 00:12:44.398 "data_offset": 0, 00:12:44.398 "data_size": 63488 00:12:44.398 }, 00:12:44.398 { 00:12:44.398 "name": "BaseBdev2", 00:12:44.398 "uuid": "a3573a4a-44d9-5872-9689-03069cae4c6a", 00:12:44.398 "is_configured": true, 00:12:44.398 "data_offset": 2048, 00:12:44.398 "data_size": 63488 00:12:44.398 } 00:12:44.398 ] 00:12:44.398 }' 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75732 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75732 ']' 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75732 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:44.398 17:05:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75732 00:12:44.657 17:05:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:44.657 17:05:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:44.657 killing process with pid 75732 00:12:44.657 17:05:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75732' 00:12:44.657 Received shutdown signal, test time was about 60.000000 seconds 00:12:44.657 00:12:44.657 Latency(us) 00:12:44.657 [2024-11-20T17:05:08.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:44.657 [2024-11-20T17:05:08.526Z] =================================================================================================================== 00:12:44.657 [2024-11-20T17:05:08.526Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:44.657 17:05:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75732 00:12:44.657 [2024-11-20 17:05:08.276919] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:44.657 17:05:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75732 00:12:44.657 [2024-11-20 17:05:08.277062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.657 [2024-11-20 17:05:08.277135] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.657 [2024-11-20 17:05:08.277156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:44.657 [2024-11-20 17:05:08.511562] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:12:46.031 ************************************ 00:12:46.031 END TEST raid_rebuild_test_sb 00:12:46.031 ************************************ 00:12:46.031 00:12:46.031 real 0m26.083s 00:12:46.031 user 0m32.067s 00:12:46.031 sys 0m3.676s 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.031 17:05:09 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:46.031 17:05:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:46.031 17:05:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.031 17:05:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:46.031 ************************************ 00:12:46.031 START TEST raid_rebuild_test_io 00:12:46.031 ************************************ 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76484 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76484 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 
76484 ']' 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.031 17:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.031 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:46.031 Zero copy mechanism will not be used. 00:12:46.031 [2024-11-20 17:05:09.649695] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:12:46.031 [2024-11-20 17:05:09.649928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76484 ] 00:12:46.031 [2024-11-20 17:05:09.832035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.290 [2024-11-20 17:05:09.944927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.290 [2024-11-20 17:05:10.139226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.290 [2024-11-20 17:05:10.139265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.857 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.857 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:46.857 17:05:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:46.857 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:46.857 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.857 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.857 BaseBdev1_malloc 00:12:46.857 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.857 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:46.857 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.857 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.857 [2024-11-20 17:05:10.646505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:46.857 [2024-11-20 17:05:10.646586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.857 [2024-11-20 17:05:10.646615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:46.857 [2024-11-20 17:05:10.646633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.857 [2024-11-20 17:05:10.649375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.857 [2024-11-20 17:05:10.649438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:46.858 BaseBdev1 00:12:46.858 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.858 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:46.858 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:12:46.858 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.858 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:46.858 BaseBdev2_malloc
00:12:46.858 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.858 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:12:46.858 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.858 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:46.858 [2024-11-20 17:05:10.692417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:12:46.858 [2024-11-20 17:05:10.692687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:46.858 [2024-11-20 17:05:10.692728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:12:46.858 [2024-11-20 17:05:10.692747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:46.858 [2024-11-20 17:05:10.695594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:46.858 [2024-11-20 17:05:10.695643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:12:46.858 BaseBdev2
00:12:46.858 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.858 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:12:46.858 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.858 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:47.117 spare_malloc
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:47.117 spare_delay
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:47.117 [2024-11-20 17:05:10.767375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:12:47.117 [2024-11-20 17:05:10.767640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:47.117 [2024-11-20 17:05:10.767713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:12:47.117 [2024-11-20 17:05:10.767738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:47.117 [2024-11-20 17:05:10.770572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:47.117 [2024-11-20 17:05:10.770635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:12:47.117 spare
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:47.117 [2024-11-20 17:05:10.775513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:47.117 [2024-11-20 17:05:10.778087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:47.117 [2024-11-20 17:05:10.778224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:12:47.117 [2024-11-20 17:05:10.778248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:12:47.117 [2024-11-20 17:05:10.778555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:12:47.117 [2024-11-20 17:05:10.778838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:12:47.117 [2024-11-20 17:05:10.778858] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:12:47.117 [2024-11-20 17:05:10.779053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:47.117 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:47.118 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:47.118 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:47.118 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:47.118 "name": "raid_bdev1",
00:12:47.118 "uuid": "34707d6f-65ea-49df-8900-1f4e939b73a0",
00:12:47.118 "strip_size_kb": 0,
00:12:47.118 "state": "online",
00:12:47.118 "raid_level": "raid1",
00:12:47.118 "superblock": false,
00:12:47.118 "num_base_bdevs": 2,
00:12:47.118 "num_base_bdevs_discovered": 2,
00:12:47.118 "num_base_bdevs_operational": 2,
00:12:47.118 "base_bdevs_list": [
00:12:47.118 {
00:12:47.118 "name": "BaseBdev1",
00:12:47.118 "uuid": "050a7294-94ea-50ad-b9dc-cc4994e6f717",
00:12:47.118 "is_configured": true,
00:12:47.118 "data_offset": 0,
00:12:47.118 "data_size": 65536
00:12:47.118 },
00:12:47.118 {
00:12:47.118 "name": "BaseBdev2",
00:12:47.118 "uuid": "ccd58849-0fd9-5f69-8172-5f19cbe107c6",
00:12:47.118 "is_configured": true,
00:12:47.118 "data_offset": 0,
00:12:47.118 "data_size": 65536
00:12:47.118 }
00:12:47.118 ]
00:12:47.118 }'
00:12:47.118 17:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:47.118 17:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:47.684 [2024-11-20 17:05:11.304057] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:47.684 [2024-11-20 17:05:11.403690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:47.684 "name": "raid_bdev1",
00:12:47.684 "uuid": "34707d6f-65ea-49df-8900-1f4e939b73a0",
00:12:47.684 "strip_size_kb": 0,
00:12:47.684 "state": "online",
00:12:47.684 "raid_level": "raid1",
00:12:47.684 "superblock": false,
00:12:47.684 "num_base_bdevs": 2,
00:12:47.684 "num_base_bdevs_discovered": 1,
00:12:47.684 "num_base_bdevs_operational": 1,
00:12:47.684 "base_bdevs_list": [
00:12:47.684 {
00:12:47.684 "name": null,
00:12:47.684 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:47.684 "is_configured": false,
00:12:47.684 "data_offset": 0,
00:12:47.684 "data_size": 65536
00:12:47.684 },
00:12:47.684 {
00:12:47.684 "name": "BaseBdev2",
00:12:47.684 "uuid": "ccd58849-0fd9-5f69-8172-5f19cbe107c6",
00:12:47.684 "is_configured": true,
00:12:47.684 "data_offset": 0,
00:12:47.684 "data_size": 65536
00:12:47.684 }
00:12:47.684 ]
00:12:47.684 }'
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:47.684 17:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:47.684 [2024-11-20 17:05:11.531858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:12:47.684 I/O size of 3145728 is greater than zero copy threshold (65536).
00:12:47.684 Zero copy mechanism will not be used.
00:12:47.684 Running I/O for 60 seconds...
00:12:48.250 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:48.250 17:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:48.250 17:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:48.250 [2024-11-20 17:05:11.938699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:48.250 17:05:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:48.250 17:05:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:12:48.250 [2024-11-20 17:05:11.992050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:12:48.250 [2024-11-20 17:05:11.994463] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:48.250 [2024-11-20 17:05:12.102676] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:12:48.250 [2024-11-20 17:05:12.103228] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:12:48.509 [2024-11-20 17:05:12.226659] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:48.509 [2024-11-20 17:05:12.226975] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:49.026 146.00 IOPS, 438.00 MiB/s [2024-11-20T17:05:12.895Z] [2024-11-20 17:05:12.671061] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:12:49.286 17:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:49.286 17:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:49.286 17:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:49.286 17:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:49.286 17:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:49.286 17:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:49.286 17:05:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:49.286 17:05:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:49.286 17:05:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:49.286 17:05:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:49.286 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:49.286 "name": "raid_bdev1",
00:12:49.286 "uuid": "34707d6f-65ea-49df-8900-1f4e939b73a0",
00:12:49.286 "strip_size_kb": 0,
00:12:49.286 "state": "online",
00:12:49.286 "raid_level": "raid1",
00:12:49.286 "superblock": false,
00:12:49.286 "num_base_bdevs": 2,
00:12:49.286 "num_base_bdevs_discovered": 2,
00:12:49.286 "num_base_bdevs_operational": 2,
00:12:49.286 "process": {
00:12:49.286 "type": "rebuild",
00:12:49.286 "target": "spare",
00:12:49.286 "progress": {
00:12:49.286 "blocks": 12288,
00:12:49.286 "percent": 18
00:12:49.286 }
00:12:49.286 },
00:12:49.286 "base_bdevs_list": [
00:12:49.286 {
00:12:49.287 "name": "spare",
00:12:49.287 "uuid": "2b70793b-1dfa-528d-a78d-db07793c7b9e",
00:12:49.287 "is_configured": true,
00:12:49.287 "data_offset": 0,
00:12:49.287 "data_size": 65536
00:12:49.287 },
00:12:49.287 {
00:12:49.287 "name": "BaseBdev2",
00:12:49.287 "uuid": "ccd58849-0fd9-5f69-8172-5f19cbe107c6",
00:12:49.287 "is_configured": true,
00:12:49.287 "data_offset": 0,
00:12:49.287 "data_size": 65536
00:12:49.287 }
00:12:49.287 ]
00:12:49.287 }'
00:12:49.287 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:49.287 [2024-11-20 17:05:13.038737] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:12:49.287 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:49.287 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:49.287 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:49.287 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:12:49.287 17:05:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:49.287 17:05:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:49.287 [2024-11-20 17:05:13.133037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:12:49.547 [2024-11-20 17:05:13.167012] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:12:49.547 [2024-11-20 17:05:13.168808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:49.547 [2024-11-20 17:05:13.169019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:12:49.547 [2024-11-20 17:05:13.169057] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:12:49.547 [2024-11-20 17:05:13.221269] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:49.547 "name": "raid_bdev1",
00:12:49.547 "uuid": "34707d6f-65ea-49df-8900-1f4e939b73a0",
00:12:49.547 "strip_size_kb": 0,
00:12:49.547 "state": "online",
00:12:49.547 "raid_level": "raid1",
00:12:49.547 "superblock": false,
00:12:49.547 "num_base_bdevs": 2,
00:12:49.547 "num_base_bdevs_discovered": 1,
00:12:49.547 "num_base_bdevs_operational": 1,
00:12:49.547 "base_bdevs_list": [
00:12:49.547 {
00:12:49.547 "name": null,
00:12:49.547 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:49.547 "is_configured": false,
00:12:49.547 "data_offset": 0,
00:12:49.547 "data_size": 65536
00:12:49.547 },
00:12:49.547 {
00:12:49.547 "name": "BaseBdev2",
00:12:49.547 "uuid": "ccd58849-0fd9-5f69-8172-5f19cbe107c6",
00:12:49.547 "is_configured": true,
00:12:49.547 "data_offset": 0,
00:12:49.547 "data_size": 65536
00:12:49.547 }
00:12:49.547 ]
00:12:49.547 }'
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:49.547 17:05:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:50.064 141.00 IOPS, 423.00 MiB/s [2024-11-20T17:05:13.933Z] 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:50.064 "name": "raid_bdev1",
00:12:50.064 "uuid": "34707d6f-65ea-49df-8900-1f4e939b73a0",
00:12:50.064 "strip_size_kb": 0,
00:12:50.064 "state": "online",
00:12:50.064 "raid_level": "raid1",
00:12:50.064 "superblock": false,
00:12:50.064 "num_base_bdevs": 2,
00:12:50.064 "num_base_bdevs_discovered": 1,
00:12:50.064 "num_base_bdevs_operational": 1,
00:12:50.064 "base_bdevs_list": [
00:12:50.064 {
00:12:50.064 "name": null,
00:12:50.064 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:50.064 "is_configured": false,
00:12:50.064 "data_offset": 0,
00:12:50.064 "data_size": 65536
00:12:50.064 },
00:12:50.064 {
00:12:50.064 "name": "BaseBdev2",
00:12:50.064 "uuid": "ccd58849-0fd9-5f69-8172-5f19cbe107c6",
00:12:50.064 "is_configured": true,
00:12:50.064 "data_offset": 0,
00:12:50.064 "data_size": 65536
00:12:50.064 }
00:12:50.064 ]
00:12:50.064 }'
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:50.064 [2024-11-20 17:05:13.896740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.064 17:05:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:12:50.064 [2024-11-20 17:05:13.923933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:12:50.064 [2024-11-20 17:05:13.926602] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:12:50.364 [2024-11-20 17:05:14.038436] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:12:50.364 [2024-11-20 17:05:14.039028] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:12:50.623 [2024-11-20 17:05:14.264359] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:50.623 [2024-11-20 17:05:14.265055] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:12:50.881 151.33 IOPS, 454.00 MiB/s [2024-11-20T17:05:14.750Z] [2024-11-20 17:05:14.638563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:12:51.139 [2024-11-20 17:05:14.857705] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:12:51.139 [2024-11-20 17:05:14.858008] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:12:51.139 17:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:51.139 17:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:51.139 17:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:51.139 17:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:51.139 17:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:51.139 17:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:51.139 17:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:51.139 17:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.139 17:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:51.139 17:05:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.139 17:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:51.139 "name": "raid_bdev1",
00:12:51.139 "uuid": "34707d6f-65ea-49df-8900-1f4e939b73a0",
00:12:51.139 "strip_size_kb": 0,
00:12:51.139 "state": "online",
00:12:51.139 "raid_level": "raid1",
00:12:51.139 "superblock": false,
00:12:51.139 "num_base_bdevs": 2,
00:12:51.139 "num_base_bdevs_discovered": 2,
00:12:51.139 "num_base_bdevs_operational": 2,
00:12:51.139 "process": {
00:12:51.139 "type": "rebuild",
00:12:51.139 "target": "spare",
00:12:51.139 "progress": {
00:12:51.139 "blocks": 10240,
00:12:51.139 "percent": 15
00:12:51.139 }
00:12:51.139 },
00:12:51.139 "base_bdevs_list": [
00:12:51.139 {
00:12:51.139 "name": "spare",
00:12:51.139 "uuid": "2b70793b-1dfa-528d-a78d-db07793c7b9e",
00:12:51.139 "is_configured": true,
00:12:51.139 "data_offset": 0,
00:12:51.139 "data_size": 65536
00:12:51.139 },
00:12:51.139 {
00:12:51.139 "name": "BaseBdev2",
00:12:51.139 "uuid": "ccd58849-0fd9-5f69-8172-5f19cbe107c6",
00:12:51.139 "is_configured": true,
00:12:51.139 "data_offset": 0,
00:12:51.139 "data_size": 65536
00:12:51.139 }
00:12:51.139 ]
00:12:51.139 }'
00:12:51.139 17:05:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=431
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:51.398 "name": "raid_bdev1",
00:12:51.398 "uuid": "34707d6f-65ea-49df-8900-1f4e939b73a0",
00:12:51.398 "strip_size_kb": 0,
00:12:51.398 "state": "online",
00:12:51.398 "raid_level": "raid1",
00:12:51.398 "superblock": false,
00:12:51.398 "num_base_bdevs": 2,
00:12:51.398 "num_base_bdevs_discovered": 2,
00:12:51.398 "num_base_bdevs_operational": 2,
00:12:51.398 "process": {
00:12:51.398 "type": "rebuild",
00:12:51.398 "target": "spare",
00:12:51.398 "progress": {
00:12:51.398 "blocks": 12288,
00:12:51.398 "percent": 18
00:12:51.398 }
00:12:51.398 },
00:12:51.398 "base_bdevs_list": [
00:12:51.398 {
00:12:51.398 "name": "spare",
00:12:51.398 "uuid": "2b70793b-1dfa-528d-a78d-db07793c7b9e",
00:12:51.398 "is_configured": true,
00:12:51.398 "data_offset": 0,
00:12:51.398 "data_size": 65536
00:12:51.398 },
00:12:51.398 {
00:12:51.398 "name": "BaseBdev2",
00:12:51.398 "uuid": "ccd58849-0fd9-5f69-8172-5f19cbe107c6",
00:12:51.398 "is_configured": true,
00:12:51.398 "data_offset": 0,
00:12:51.398 "data_size": 65536
00:12:51.398 }
00:12:51.398 ]
00:12:51.398 }'
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:51.398 17:05:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:12:51.656 [2024-11-20 17:05:15.342223] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:12:51.914 127.00 IOPS, 381.00 MiB/s [2024-11-20T17:05:15.783Z] [2024-11-20 17:05:15.662182] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:12:52.173 [2024-11-20 17:05:15.780207] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:12:52.432 [2024-11-20 17:05:16.203160] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:12:52.432 17:05:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:52.432 17:05:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:52.432 17:05:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:52.432 17:05:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:52.432 17:05:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:12:52.432 17:05:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:52.432 17:05:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:52.432 17:05:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:52.432 17:05:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:52.432 17:05:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:12:52.432 17:05:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:52.432 17:05:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:52.432 "name": "raid_bdev1",
00:12:52.432 "uuid": "34707d6f-65ea-49df-8900-1f4e939b73a0",
00:12:52.432 "strip_size_kb": 0,
00:12:52.432 "state": "online",
00:12:52.432 "raid_level": "raid1",
00:12:52.432 "superblock": false,
00:12:52.432 "num_base_bdevs": 2,
00:12:52.432 "num_base_bdevs_discovered": 2,
00:12:52.432 "num_base_bdevs_operational": 2,
00:12:52.432 "process": {
00:12:52.432 "type": "rebuild",
00:12:52.432 "target": "spare",
00:12:52.432 "progress": {
00:12:52.432 "blocks": 28672,
00:12:52.432 "percent": 43
00:12:52.432 }
00:12:52.432 },
00:12:52.432 "base_bdevs_list": [
00:12:52.432 {
00:12:52.432 "name": "spare",
00:12:52.432 "uuid": "2b70793b-1dfa-528d-a78d-db07793c7b9e",
00:12:52.432 "is_configured": true,
00:12:52.432 "data_offset": 0,
00:12:52.432 "data_size": 65536
00:12:52.432 },
00:12:52.432 {
00:12:52.432 "name": "BaseBdev2",
00:12:52.432 "uuid": "ccd58849-0fd9-5f69-8172-5f19cbe107c6",
00:12:52.432 "is_configured": true,
00:12:52.432 "data_offset": 0,
00:12:52.432 "data_size": 65536
00:12:52.432 }
00:12:52.432 ]
00:12:52.432 }'
00:12:52.690 17:05:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:52.690 17:05:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:12:52.690 17:05:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:52.690 17:05:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:12:52.690 17:05:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:12:53.627 110.40 IOPS, 331.20 MiB/s [2024-11-20T17:05:17.496Z] [2024-11-20 17:05:17.221649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152
00:12:53.627 17:05:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:12:53.627 17:05:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:12:53.627 17:05:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:53.627 17:05:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:12:53.627 17:05:17 bdev_raid.raid_rebuild_test_io --
bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.627 17:05:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.627 17:05:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.627 17:05:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.627 17:05:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.627 17:05:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.627 17:05:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.627 17:05:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.627 "name": "raid_bdev1", 00:12:53.627 "uuid": "34707d6f-65ea-49df-8900-1f4e939b73a0", 00:12:53.627 "strip_size_kb": 0, 00:12:53.627 "state": "online", 00:12:53.627 "raid_level": "raid1", 00:12:53.627 "superblock": false, 00:12:53.627 "num_base_bdevs": 2, 00:12:53.627 "num_base_bdevs_discovered": 2, 00:12:53.627 "num_base_bdevs_operational": 2, 00:12:53.627 "process": { 00:12:53.627 "type": "rebuild", 00:12:53.627 "target": "spare", 00:12:53.627 "progress": { 00:12:53.627 "blocks": 47104, 00:12:53.627 "percent": 71 00:12:53.627 } 00:12:53.627 }, 00:12:53.627 "base_bdevs_list": [ 00:12:53.627 { 00:12:53.627 "name": "spare", 00:12:53.627 "uuid": "2b70793b-1dfa-528d-a78d-db07793c7b9e", 00:12:53.627 "is_configured": true, 00:12:53.627 "data_offset": 0, 00:12:53.627 "data_size": 65536 00:12:53.627 }, 00:12:53.627 { 00:12:53.627 "name": "BaseBdev2", 00:12:53.627 "uuid": "ccd58849-0fd9-5f69-8172-5f19cbe107c6", 00:12:53.627 "is_configured": true, 00:12:53.627 "data_offset": 0, 00:12:53.627 "data_size": 65536 00:12:53.627 } 00:12:53.627 ] 00:12:53.627 }' 00:12:53.627 17:05:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.885 
17:05:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:53.885 17:05:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.885 97.83 IOPS, 293.50 MiB/s [2024-11-20T17:05:17.754Z] 17:05:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.886 17:05:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:53.886 [2024-11-20 17:05:17.667945] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:54.144 [2024-11-20 17:05:17.972374] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:54.144 [2024-11-20 17:05:17.973260] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:54.402 [2024-11-20 17:05:18.183576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:54.402 [2024-11-20 17:05:18.184038] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:54.661 [2024-11-20 17:05:18.518557] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:54.920 90.57 IOPS, 271.71 MiB/s [2024-11-20T17:05:18.789Z] 17:05:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:54.920 17:05:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.920 17:05:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.920 17:05:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.920 17:05:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.920 17:05:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.920 17:05:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.920 17:05:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.920 17:05:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.920 17:05:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.920 17:05:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.920 [2024-11-20 17:05:18.625138] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:54.920 17:05:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.920 "name": "raid_bdev1", 00:12:54.920 "uuid": "34707d6f-65ea-49df-8900-1f4e939b73a0", 00:12:54.920 "strip_size_kb": 0, 00:12:54.920 "state": "online", 00:12:54.920 "raid_level": "raid1", 00:12:54.920 "superblock": false, 00:12:54.920 "num_base_bdevs": 2, 00:12:54.920 "num_base_bdevs_discovered": 2, 00:12:54.920 "num_base_bdevs_operational": 2, 00:12:54.920 "process": { 00:12:54.920 "type": "rebuild", 00:12:54.920 "target": "spare", 00:12:54.920 "progress": { 00:12:54.920 "blocks": 65536, 00:12:54.920 "percent": 100 00:12:54.920 } 00:12:54.920 }, 00:12:54.920 "base_bdevs_list": [ 00:12:54.920 { 00:12:54.920 "name": "spare", 00:12:54.920 "uuid": "2b70793b-1dfa-528d-a78d-db07793c7b9e", 00:12:54.920 "is_configured": true, 00:12:54.920 "data_offset": 0, 00:12:54.921 "data_size": 65536 00:12:54.921 }, 00:12:54.921 { 00:12:54.921 "name": "BaseBdev2", 00:12:54.921 "uuid": "ccd58849-0fd9-5f69-8172-5f19cbe107c6", 00:12:54.921 "is_configured": true, 00:12:54.921 "data_offset": 0, 00:12:54.921 "data_size": 65536 00:12:54.921 } 00:12:54.921 ] 
00:12:54.921 }' 00:12:54.921 [2024-11-20 17:05:18.628567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.921 17:05:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.921 17:05:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.921 17:05:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.921 17:05:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.921 17:05:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:56.115 82.38 IOPS, 247.12 MiB/s [2024-11-20T17:05:19.984Z] 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.115 "name": "raid_bdev1", 00:12:56.115 "uuid": "34707d6f-65ea-49df-8900-1f4e939b73a0", 00:12:56.115 "strip_size_kb": 0, 00:12:56.115 "state": "online", 00:12:56.115 "raid_level": "raid1", 00:12:56.115 "superblock": false, 00:12:56.115 "num_base_bdevs": 2, 00:12:56.115 "num_base_bdevs_discovered": 2, 00:12:56.115 "num_base_bdevs_operational": 2, 00:12:56.115 "base_bdevs_list": [ 00:12:56.115 { 00:12:56.115 "name": "spare", 00:12:56.115 "uuid": "2b70793b-1dfa-528d-a78d-db07793c7b9e", 00:12:56.115 "is_configured": true, 00:12:56.115 "data_offset": 0, 00:12:56.115 "data_size": 65536 00:12:56.115 }, 00:12:56.115 { 00:12:56.115 "name": "BaseBdev2", 00:12:56.115 "uuid": "ccd58849-0fd9-5f69-8172-5f19cbe107c6", 00:12:56.115 "is_configured": true, 00:12:56.115 "data_offset": 0, 00:12:56.115 "data_size": 65536 00:12:56.115 } 00:12:56.115 ] 00:12:56.115 }' 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.115 "name": "raid_bdev1", 00:12:56.115 "uuid": "34707d6f-65ea-49df-8900-1f4e939b73a0", 00:12:56.115 "strip_size_kb": 0, 00:12:56.115 "state": "online", 00:12:56.115 "raid_level": "raid1", 00:12:56.115 "superblock": false, 00:12:56.115 "num_base_bdevs": 2, 00:12:56.115 "num_base_bdevs_discovered": 2, 00:12:56.115 "num_base_bdevs_operational": 2, 00:12:56.115 "base_bdevs_list": [ 00:12:56.115 { 00:12:56.115 "name": "spare", 00:12:56.115 "uuid": "2b70793b-1dfa-528d-a78d-db07793c7b9e", 00:12:56.115 "is_configured": true, 00:12:56.115 "data_offset": 0, 00:12:56.115 "data_size": 65536 00:12:56.115 }, 00:12:56.115 { 00:12:56.115 "name": "BaseBdev2", 00:12:56.115 "uuid": "ccd58849-0fd9-5f69-8172-5f19cbe107c6", 00:12:56.115 "is_configured": true, 00:12:56.115 "data_offset": 0, 00:12:56.115 "data_size": 65536 00:12:56.115 } 00:12:56.115 ] 00:12:56.115 }' 00:12:56.115 17:05:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.374 17:05:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.374 "name": "raid_bdev1", 00:12:56.374 "uuid": "34707d6f-65ea-49df-8900-1f4e939b73a0", 00:12:56.374 "strip_size_kb": 0, 00:12:56.374 "state": "online", 00:12:56.374 "raid_level": "raid1", 00:12:56.374 "superblock": false, 00:12:56.374 "num_base_bdevs": 2, 
00:12:56.374 "num_base_bdevs_discovered": 2, 00:12:56.374 "num_base_bdevs_operational": 2, 00:12:56.374 "base_bdevs_list": [ 00:12:56.374 { 00:12:56.374 "name": "spare", 00:12:56.374 "uuid": "2b70793b-1dfa-528d-a78d-db07793c7b9e", 00:12:56.374 "is_configured": true, 00:12:56.374 "data_offset": 0, 00:12:56.374 "data_size": 65536 00:12:56.374 }, 00:12:56.374 { 00:12:56.374 "name": "BaseBdev2", 00:12:56.374 "uuid": "ccd58849-0fd9-5f69-8172-5f19cbe107c6", 00:12:56.374 "is_configured": true, 00:12:56.374 "data_offset": 0, 00:12:56.374 "data_size": 65536 00:12:56.374 } 00:12:56.374 ] 00:12:56.374 }' 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.374 17:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.941 76.11 IOPS, 228.33 MiB/s [2024-11-20T17:05:20.810Z] 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:56.941 17:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.941 17:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.941 [2024-11-20 17:05:20.584441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:56.941 [2024-11-20 17:05:20.584649] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:56.941 00:12:56.941 Latency(us) 00:12:56.941 [2024-11-20T17:05:20.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.941 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:56.941 raid_bdev1 : 9.15 75.63 226.89 0.00 0.00 18061.72 260.65 119156.36 00:12:56.941 [2024-11-20T17:05:20.810Z] =================================================================================================================== 00:12:56.941 [2024-11-20T17:05:20.810Z] Total : 75.63 226.89 0.00 0.00 18061.72 
260.65 119156.36 00:12:56.941 [2024-11-20 17:05:20.702860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.941 { 00:12:56.941 "results": [ 00:12:56.941 { 00:12:56.941 "job": "raid_bdev1", 00:12:56.941 "core_mask": "0x1", 00:12:56.941 "workload": "randrw", 00:12:56.941 "percentage": 50, 00:12:56.941 "status": "finished", 00:12:56.941 "queue_depth": 2, 00:12:56.941 "io_size": 3145728, 00:12:56.941 "runtime": 9.149855, 00:12:56.941 "iops": 75.6296138026231, 00:12:56.941 "mibps": 226.88884140786928, 00:12:56.941 "io_failed": 0, 00:12:56.941 "io_timeout": 0, 00:12:56.941 "avg_latency_us": 18061.7201471361, 00:12:56.941 "min_latency_us": 260.6545454545454, 00:12:56.941 "max_latency_us": 119156.36363636363 00:12:56.941 } 00:12:56.942 ], 00:12:56.942 "core_count": 1 00:12:56.942 } 00:12:56.942 [2024-11-20 17:05:20.703138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.942 [2024-11-20 17:05:20.703253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:56.942 [2024-11-20 17:05:20.703274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:56.942 17:05:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:56.942 17:05:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:57.201 /dev/nbd0 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.201 1+0 records in 00:12:57.201 1+0 records out 00:12:57.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354746 s, 11.5 MB/s 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:57.201 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:57.459 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.459 17:05:21 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:57.459 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:57.459 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:57.459 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:57.459 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:57.459 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:57.459 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:57.459 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:57.459 /dev/nbd1 00:12:57.717 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:57.717 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:57.717 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:57.717 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:57.717 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.718 1+0 records in 00:12:57.718 1+0 records out 00:12:57.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322596 s, 12.7 MB/s 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.718 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:12:58.284 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.284 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.284 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:58.284 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.284 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.284 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:58.284 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:58.284 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.284 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:58.284 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.284 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:58.284 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.284 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:58.284 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.284 17:05:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76484 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76484 ']' 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76484 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76484 00:12:58.543 killing process with pid 76484 00:12:58.543 Received shutdown signal, test time was about 10.687369 seconds 00:12:58.543 00:12:58.543 Latency(us) 00:12:58.543 [2024-11-20T17:05:22.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.543 [2024-11-20T17:05:22.412Z] =================================================================================================================== 00:12:58.543 [2024-11-20T17:05:22.412Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 76484' 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76484 00:12:58.543 [2024-11-20 17:05:22.221886] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:58.543 17:05:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76484 00:12:58.543 [2024-11-20 17:05:22.402180] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.920 ************************************ 00:12:59.920 END TEST raid_rebuild_test_io 00:12:59.920 ************************************ 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:59.920 00:12:59.920 real 0m13.885s 00:12:59.920 user 0m18.052s 00:12:59.920 sys 0m1.411s 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.920 17:05:23 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:59.920 17:05:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:59.920 17:05:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.920 17:05:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.920 ************************************ 00:12:59.920 START TEST raid_rebuild_test_sb_io 00:12:59.920 ************************************ 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local 
superblock=true 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:59.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76886 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76886 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76886 ']' 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.920 17:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.920 [2024-11-20 17:05:23.602938] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:12:59.920 [2024-11-20 17:05:23.603300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76886 ] 00:12:59.920 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:59.920 Zero copy mechanism will not be used. 00:13:00.178 [2024-11-20 17:05:23.788630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.178 [2024-11-20 17:05:23.901601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.437 [2024-11-20 17:05:24.100443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.437 [2024-11-20 17:05:24.100619] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.004 BaseBdev1_malloc 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.004 17:05:24 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.004 [2024-11-20 17:05:24.620507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:01.004 [2024-11-20 17:05:24.620759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.004 [2024-11-20 17:05:24.620822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:01.004 [2024-11-20 17:05:24.620843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.004 [2024-11-20 17:05:24.623565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.004 [2024-11-20 17:05:24.623613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:01.004 BaseBdev1 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.004 BaseBdev2_malloc 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.004 [2024-11-20 17:05:24.674682] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:13:01.004 [2024-11-20 17:05:24.674780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.004 [2024-11-20 17:05:24.674839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:01.004 [2024-11-20 17:05:24.674858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.004 [2024-11-20 17:05:24.677624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.004 [2024-11-20 17:05:24.677685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:01.004 BaseBdev2 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.004 spare_malloc 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.004 spare_delay 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:01.004 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.004 [2024-11-20 17:05:24.753322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:01.004 [2024-11-20 17:05:24.753407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.004 [2024-11-20 17:05:24.753438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:01.005 [2024-11-20 17:05:24.753455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.005 [2024-11-20 17:05:24.756335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.005 [2024-11-20 17:05:24.756563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:01.005 spare 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.005 [2024-11-20 17:05:24.761445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.005 [2024-11-20 17:05:24.764070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:01.005 [2024-11-20 17:05:24.764468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:01.005 [2024-11-20 17:05:24.764601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:01.005 [2024-11-20 17:05:24.764987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:01.005 
[2024-11-20 17:05:24.765359] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:01.005 [2024-11-20 17:05:24.765479] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:01.005 [2024-11-20 17:05:24.765858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.005 
17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.005 "name": "raid_bdev1", 00:13:01.005 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:01.005 "strip_size_kb": 0, 00:13:01.005 "state": "online", 00:13:01.005 "raid_level": "raid1", 00:13:01.005 "superblock": true, 00:13:01.005 "num_base_bdevs": 2, 00:13:01.005 "num_base_bdevs_discovered": 2, 00:13:01.005 "num_base_bdevs_operational": 2, 00:13:01.005 "base_bdevs_list": [ 00:13:01.005 { 00:13:01.005 "name": "BaseBdev1", 00:13:01.005 "uuid": "a8762b50-08a1-5ae3-8c89-3a68a8a0df27", 00:13:01.005 "is_configured": true, 00:13:01.005 "data_offset": 2048, 00:13:01.005 "data_size": 63488 00:13:01.005 }, 00:13:01.005 { 00:13:01.005 "name": "BaseBdev2", 00:13:01.005 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:01.005 "is_configured": true, 00:13:01.005 "data_offset": 2048, 00:13:01.005 "data_size": 63488 00:13:01.005 } 00:13:01.005 ] 00:13:01.005 }' 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.005 17:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.584 [2024-11-20 17:05:25.274302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.584 [2024-11-20 17:05:25.377947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.584 "name": "raid_bdev1", 00:13:01.584 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:01.584 "strip_size_kb": 0, 00:13:01.584 "state": "online", 00:13:01.584 "raid_level": "raid1", 00:13:01.584 "superblock": true, 00:13:01.584 "num_base_bdevs": 2, 00:13:01.584 "num_base_bdevs_discovered": 1, 00:13:01.584 "num_base_bdevs_operational": 1, 00:13:01.584 "base_bdevs_list": [ 00:13:01.584 { 00:13:01.584 "name": null, 00:13:01.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.584 "is_configured": false, 00:13:01.584 
"data_offset": 0, 00:13:01.584 "data_size": 63488 00:13:01.584 }, 00:13:01.584 { 00:13:01.584 "name": "BaseBdev2", 00:13:01.584 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:01.584 "is_configured": true, 00:13:01.584 "data_offset": 2048, 00:13:01.584 "data_size": 63488 00:13:01.584 } 00:13:01.584 ] 00:13:01.584 }' 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.584 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.852 [2024-11-20 17:05:25.485754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:01.852 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:01.852 Zero copy mechanism will not be used. 00:13:01.852 Running I/O for 60 seconds... 00:13:02.110 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:02.110 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.110 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.110 [2024-11-20 17:05:25.902626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:02.110 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.110 17:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:02.110 [2024-11-20 17:05:25.966330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:02.110 [2024-11-20 17:05:25.968968] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:02.368 [2024-11-20 17:05:26.098785] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:02.368 [2024-11-20 17:05:26.099443] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:02.627 [2024-11-20 17:05:26.311262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:02.627 [2024-11-20 17:05:26.311955] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:02.886 230.00 IOPS, 690.00 MiB/s [2024-11-20T17:05:26.755Z] [2024-11-20 17:05:26.558812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:02.886 [2024-11-20 17:05:26.694956] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:02.886 [2024-11-20 17:05:26.695475] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:03.144 17:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.144 17:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.144 17:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.144 17:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.144 17:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.144 17:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.144 17:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.144 17:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.144 17:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.144 17:05:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.144 17:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.144 "name": "raid_bdev1", 00:13:03.144 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:03.144 "strip_size_kb": 0, 00:13:03.144 "state": "online", 00:13:03.144 "raid_level": "raid1", 00:13:03.144 "superblock": true, 00:13:03.144 "num_base_bdevs": 2, 00:13:03.144 "num_base_bdevs_discovered": 2, 00:13:03.144 "num_base_bdevs_operational": 2, 00:13:03.144 "process": { 00:13:03.144 "type": "rebuild", 00:13:03.144 "target": "spare", 00:13:03.144 "progress": { 00:13:03.144 "blocks": 12288, 00:13:03.144 "percent": 19 00:13:03.144 } 00:13:03.144 }, 00:13:03.144 "base_bdevs_list": [ 00:13:03.144 { 00:13:03.144 "name": "spare", 00:13:03.144 "uuid": "c27d3d27-310e-5825-a1db-e33cd1c278f6", 00:13:03.144 "is_configured": true, 00:13:03.144 "data_offset": 2048, 00:13:03.144 "data_size": 63488 00:13:03.144 }, 00:13:03.144 { 00:13:03.144 "name": "BaseBdev2", 00:13:03.144 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:03.144 "is_configured": true, 00:13:03.144 "data_offset": 2048, 00:13:03.144 "data_size": 63488 00:13:03.144 } 00:13:03.144 ] 00:13:03.144 }' 00:13:03.144 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.403 [2024-11-20 17:05:27.016673] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd 
bdev_raid_remove_base_bdev spare 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.403 [2024-11-20 17:05:27.112830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:03.403 [2024-11-20 17:05:27.142554] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:03.403 [2024-11-20 17:05:27.144191] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:03.403 [2024-11-20 17:05:27.153617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.403 [2024-11-20 17:05:27.153658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:03.403 [2024-11-20 17:05:27.153672] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:03.403 [2024-11-20 17:05:27.185278] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:03.403 
17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.403 "name": "raid_bdev1", 00:13:03.403 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:03.403 "strip_size_kb": 0, 00:13:03.403 "state": "online", 00:13:03.403 "raid_level": "raid1", 00:13:03.403 "superblock": true, 00:13:03.403 "num_base_bdevs": 2, 00:13:03.403 "num_base_bdevs_discovered": 1, 00:13:03.403 "num_base_bdevs_operational": 1, 00:13:03.403 "base_bdevs_list": [ 00:13:03.403 { 00:13:03.403 "name": null, 00:13:03.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.403 "is_configured": false, 00:13:03.403 "data_offset": 0, 00:13:03.403 "data_size": 63488 00:13:03.403 }, 00:13:03.403 { 00:13:03.403 "name": "BaseBdev2", 00:13:03.403 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:03.403 "is_configured": true, 00:13:03.403 "data_offset": 2048, 00:13:03.403 "data_size": 63488 00:13:03.403 } 00:13:03.403 ] 00:13:03.403 }' 00:13:03.403 17:05:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.403 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.920 169.00 IOPS, 507.00 MiB/s [2024-11-20T17:05:27.789Z] 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:03.920 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.920 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:03.920 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:03.920 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.920 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.920 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.920 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.920 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.920 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.920 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.920 "name": "raid_bdev1", 00:13:03.920 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:03.920 "strip_size_kb": 0, 00:13:03.920 "state": "online", 00:13:03.920 "raid_level": "raid1", 00:13:03.920 "superblock": true, 00:13:03.920 "num_base_bdevs": 2, 00:13:03.920 "num_base_bdevs_discovered": 1, 00:13:03.920 "num_base_bdevs_operational": 1, 00:13:03.920 "base_bdevs_list": [ 00:13:03.920 { 00:13:03.920 "name": null, 00:13:03.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.920 "is_configured": false, 
00:13:03.920 "data_offset": 0, 00:13:03.920 "data_size": 63488 00:13:03.920 }, 00:13:03.920 { 00:13:03.920 "name": "BaseBdev2", 00:13:03.920 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:03.920 "is_configured": true, 00:13:03.920 "data_offset": 2048, 00:13:03.920 "data_size": 63488 00:13:03.920 } 00:13:03.920 ] 00:13:03.920 }' 00:13:03.920 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.179 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:04.179 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.179 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:04.179 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:04.179 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.179 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.179 [2024-11-20 17:05:27.903920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:04.179 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.179 17:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:04.179 [2024-11-20 17:05:27.964806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:04.179 [2024-11-20 17:05:27.967285] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:04.437 [2024-11-20 17:05:28.076410] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:04.437 [2024-11-20 17:05:28.077099] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:04.437 [2024-11-20 17:05:28.289402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:04.437 [2024-11-20 17:05:28.289748] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:04.954 177.33 IOPS, 532.00 MiB/s [2024-11-20T17:05:28.823Z] [2024-11-20 17:05:28.617160] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:04.954 [2024-11-20 17:05:28.617966] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:05.213 17:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.213 17:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.213 17:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.213 17:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.213 17:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.213 17:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.213 17:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.213 17:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.213 17:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.213 17:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.213 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:05.213 "name": "raid_bdev1", 00:13:05.213 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:05.213 "strip_size_kb": 0, 00:13:05.213 "state": "online", 00:13:05.213 "raid_level": "raid1", 00:13:05.213 "superblock": true, 00:13:05.213 "num_base_bdevs": 2, 00:13:05.213 "num_base_bdevs_discovered": 2, 00:13:05.213 "num_base_bdevs_operational": 2, 00:13:05.213 "process": { 00:13:05.213 "type": "rebuild", 00:13:05.213 "target": "spare", 00:13:05.213 "progress": { 00:13:05.213 "blocks": 12288, 00:13:05.213 "percent": 19 00:13:05.213 } 00:13:05.213 }, 00:13:05.213 "base_bdevs_list": [ 00:13:05.213 { 00:13:05.213 "name": "spare", 00:13:05.213 "uuid": "c27d3d27-310e-5825-a1db-e33cd1c278f6", 00:13:05.213 "is_configured": true, 00:13:05.213 "data_offset": 2048, 00:13:05.213 "data_size": 63488 00:13:05.213 }, 00:13:05.213 { 00:13:05.213 "name": "BaseBdev2", 00:13:05.213 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:05.213 "is_configured": true, 00:13:05.213 "data_offset": 2048, 00:13:05.213 "data_size": 63488 00:13:05.213 } 00:13:05.213 ] 00:13:05.213 }' 00:13:05.213 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.213 [2024-11-20 17:05:29.039340] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:05.213 [2024-11-20 17:05:29.040024] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:05.213 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.213 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.471 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.471 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:05.471 17:05:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:05.471 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:05.471 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:05.471 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:05.471 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:05.471 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=445 00:13:05.471 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:05.472 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.472 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.472 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.472 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.472 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.472 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.472 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.472 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.472 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.472 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.472 [2024-11-20 17:05:29.149934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 
offset_begin: 12288 offset_end: 18432 00:13:05.472 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.472 "name": "raid_bdev1", 00:13:05.472 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:05.472 "strip_size_kb": 0, 00:13:05.472 "state": "online", 00:13:05.472 "raid_level": "raid1", 00:13:05.472 "superblock": true, 00:13:05.472 "num_base_bdevs": 2, 00:13:05.472 "num_base_bdevs_discovered": 2, 00:13:05.472 "num_base_bdevs_operational": 2, 00:13:05.472 "process": { 00:13:05.472 "type": "rebuild", 00:13:05.472 "target": "spare", 00:13:05.472 "progress": { 00:13:05.472 "blocks": 14336, 00:13:05.472 "percent": 22 00:13:05.472 } 00:13:05.472 }, 00:13:05.472 "base_bdevs_list": [ 00:13:05.472 { 00:13:05.472 "name": "spare", 00:13:05.472 "uuid": "c27d3d27-310e-5825-a1db-e33cd1c278f6", 00:13:05.472 "is_configured": true, 00:13:05.472 "data_offset": 2048, 00:13:05.472 "data_size": 63488 00:13:05.472 }, 00:13:05.472 { 00:13:05.472 "name": "BaseBdev2", 00:13:05.472 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:05.472 "is_configured": true, 00:13:05.472 "data_offset": 2048, 00:13:05.472 "data_size": 63488 00:13:05.472 } 00:13:05.472 ] 00:13:05.472 }' 00:13:05.472 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.472 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.472 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.472 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.472 17:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:05.989 161.00 IOPS, 483.00 MiB/s [2024-11-20T17:05:29.858Z] [2024-11-20 17:05:29.705048] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 
00:13:05.989 [2024-11-20 17:05:29.705511] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:06.248 [2024-11-20 17:05:29.922374] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:06.506 17:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:06.506 17:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.506 17:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.506 17:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.506 17:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.506 17:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.506 17:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.506 17:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.506 17:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.506 17:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.506 17:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.506 17:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.506 "name": "raid_bdev1", 00:13:06.506 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:06.506 "strip_size_kb": 0, 00:13:06.506 "state": "online", 00:13:06.506 "raid_level": "raid1", 00:13:06.506 "superblock": true, 00:13:06.506 "num_base_bdevs": 2, 00:13:06.506 "num_base_bdevs_discovered": 2, 
00:13:06.506 "num_base_bdevs_operational": 2, 00:13:06.506 "process": { 00:13:06.506 "type": "rebuild", 00:13:06.506 "target": "spare", 00:13:06.506 "progress": { 00:13:06.506 "blocks": 32768, 00:13:06.506 "percent": 51 00:13:06.506 } 00:13:06.506 }, 00:13:06.506 "base_bdevs_list": [ 00:13:06.506 { 00:13:06.506 "name": "spare", 00:13:06.506 "uuid": "c27d3d27-310e-5825-a1db-e33cd1c278f6", 00:13:06.506 "is_configured": true, 00:13:06.506 "data_offset": 2048, 00:13:06.506 "data_size": 63488 00:13:06.506 }, 00:13:06.506 { 00:13:06.506 "name": "BaseBdev2", 00:13:06.506 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:06.506 "is_configured": true, 00:13:06.506 "data_offset": 2048, 00:13:06.506 "data_size": 63488 00:13:06.506 } 00:13:06.506 ] 00:13:06.506 }' 00:13:06.506 17:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.765 17:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.765 17:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.765 17:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.765 17:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:06.765 138.20 IOPS, 414.60 MiB/s [2024-11-20T17:05:30.634Z] [2024-11-20 17:05:30.597624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:06.765 [2024-11-20 17:05:30.598026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:07.023 [2024-11-20 17:05:30.801103] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:07.281 [2024-11-20 17:05:31.126016] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 
offset_begin: 43008 offset_end: 49152 00:13:07.541 [2024-11-20 17:05:31.266166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:07.800 17:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:07.800 17:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.800 17:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.800 17:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.800 17:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.800 17:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.800 17:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.800 17:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.800 17:05:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.800 17:05:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.800 17:05:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.800 120.67 IOPS, 362.00 MiB/s [2024-11-20T17:05:31.669Z] 17:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.800 "name": "raid_bdev1", 00:13:07.800 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:07.800 "strip_size_kb": 0, 00:13:07.800 "state": "online", 00:13:07.800 "raid_level": "raid1", 00:13:07.800 "superblock": true, 00:13:07.800 "num_base_bdevs": 2, 00:13:07.800 "num_base_bdevs_discovered": 2, 00:13:07.800 "num_base_bdevs_operational": 2, 00:13:07.800 "process": { 
00:13:07.800 "type": "rebuild", 00:13:07.800 "target": "spare", 00:13:07.800 "progress": { 00:13:07.800 "blocks": 47104, 00:13:07.800 "percent": 74 00:13:07.800 } 00:13:07.800 }, 00:13:07.800 "base_bdevs_list": [ 00:13:07.800 { 00:13:07.800 "name": "spare", 00:13:07.800 "uuid": "c27d3d27-310e-5825-a1db-e33cd1c278f6", 00:13:07.800 "is_configured": true, 00:13:07.800 "data_offset": 2048, 00:13:07.800 "data_size": 63488 00:13:07.800 }, 00:13:07.800 { 00:13:07.800 "name": "BaseBdev2", 00:13:07.800 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:07.800 "is_configured": true, 00:13:07.800 "data_offset": 2048, 00:13:07.800 "data_size": 63488 00:13:07.800 } 00:13:07.800 ] 00:13:07.800 }' 00:13:07.800 17:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.800 17:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.800 17:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.800 [2024-11-20 17:05:31.589559] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:07.800 17:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.800 17:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:08.058 [2024-11-20 17:05:31.708181] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:08.625 [2024-11-20 17:05:32.376804] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:08.625 [2024-11-20 17:05:32.483894] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:08.625 [2024-11-20 17:05:32.487065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.884 108.86 IOPS, 326.57 
MiB/s [2024-11-20T17:05:32.753Z] 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.884 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.884 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.884 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.884 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.884 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.884 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.884 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.884 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.884 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.884 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.884 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.884 "name": "raid_bdev1", 00:13:08.884 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:08.884 "strip_size_kb": 0, 00:13:08.884 "state": "online", 00:13:08.884 "raid_level": "raid1", 00:13:08.884 "superblock": true, 00:13:08.884 "num_base_bdevs": 2, 00:13:08.884 "num_base_bdevs_discovered": 2, 00:13:08.884 "num_base_bdevs_operational": 2, 00:13:08.884 "base_bdevs_list": [ 00:13:08.884 { 00:13:08.884 "name": "spare", 00:13:08.884 "uuid": "c27d3d27-310e-5825-a1db-e33cd1c278f6", 00:13:08.884 "is_configured": true, 00:13:08.884 "data_offset": 2048, 00:13:08.884 "data_size": 63488 00:13:08.884 }, 
00:13:08.884 { 00:13:08.884 "name": "BaseBdev2", 00:13:08.884 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:08.884 "is_configured": true, 00:13:08.884 "data_offset": 2048, 00:13:08.884 "data_size": 63488 00:13:08.884 } 00:13:08.884 ] 00:13:08.884 }' 00:13:08.884 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.884 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:08.884 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.142 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:09.142 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:09.142 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.143 17:05:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.143 "name": "raid_bdev1", 00:13:09.143 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:09.143 "strip_size_kb": 0, 00:13:09.143 "state": "online", 00:13:09.143 "raid_level": "raid1", 00:13:09.143 "superblock": true, 00:13:09.143 "num_base_bdevs": 2, 00:13:09.143 "num_base_bdevs_discovered": 2, 00:13:09.143 "num_base_bdevs_operational": 2, 00:13:09.143 "base_bdevs_list": [ 00:13:09.143 { 00:13:09.143 "name": "spare", 00:13:09.143 "uuid": "c27d3d27-310e-5825-a1db-e33cd1c278f6", 00:13:09.143 "is_configured": true, 00:13:09.143 "data_offset": 2048, 00:13:09.143 "data_size": 63488 00:13:09.143 }, 00:13:09.143 { 00:13:09.143 "name": "BaseBdev2", 00:13:09.143 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:09.143 "is_configured": true, 00:13:09.143 "data_offset": 2048, 00:13:09.143 "data_size": 63488 00:13:09.143 } 00:13:09.143 ] 00:13:09.143 }' 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.143 "name": "raid_bdev1", 00:13:09.143 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:09.143 "strip_size_kb": 0, 00:13:09.143 "state": "online", 00:13:09.143 "raid_level": "raid1", 00:13:09.143 "superblock": true, 00:13:09.143 "num_base_bdevs": 2, 00:13:09.143 "num_base_bdevs_discovered": 2, 00:13:09.143 "num_base_bdevs_operational": 2, 00:13:09.143 "base_bdevs_list": [ 00:13:09.143 { 00:13:09.143 "name": "spare", 00:13:09.143 "uuid": "c27d3d27-310e-5825-a1db-e33cd1c278f6", 00:13:09.143 "is_configured": true, 00:13:09.143 "data_offset": 2048, 00:13:09.143 "data_size": 63488 00:13:09.143 }, 00:13:09.143 { 00:13:09.143 "name": "BaseBdev2", 00:13:09.143 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:09.143 "is_configured": true, 00:13:09.143 
"data_offset": 2048, 00:13:09.143 "data_size": 63488 00:13:09.143 } 00:13:09.143 ] 00:13:09.143 }' 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.143 17:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.710 [2024-11-20 17:05:33.466707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:09.710 [2024-11-20 17:05:33.466741] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.710 00:13:09.710 Latency(us) 00:13:09.710 [2024-11-20T17:05:33.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.710 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:09.710 raid_bdev1 : 8.00 100.03 300.08 0.00 0.00 12978.29 256.93 116296.61 00:13:09.710 [2024-11-20T17:05:33.579Z] =================================================================================================================== 00:13:09.710 [2024-11-20T17:05:33.579Z] Total : 100.03 300.08 0.00 0.00 12978.29 256.93 116296.61 00:13:09.710 [2024-11-20 17:05:33.503870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.710 { 00:13:09.710 "results": [ 00:13:09.710 { 00:13:09.710 "job": "raid_bdev1", 00:13:09.710 "core_mask": "0x1", 00:13:09.710 "workload": "randrw", 00:13:09.710 "percentage": 50, 00:13:09.710 "status": "finished", 00:13:09.710 "queue_depth": 2, 00:13:09.710 "io_size": 3145728, 00:13:09.710 "runtime": 7.997928, 00:13:09.710 "iops": 100.02590670983784, 00:13:09.710 "mibps": 300.0777201295135, 
00:13:09.710 "io_failed": 0, 00:13:09.710 "io_timeout": 0, 00:13:09.710 "avg_latency_us": 12978.2912, 00:13:09.710 "min_latency_us": 256.9309090909091, 00:13:09.710 "max_latency_us": 116296.61090909092 00:13:09.710 } 00:13:09.710 ], 00:13:09.710 "core_count": 1 00:13:09.710 } 00:13:09.710 [2024-11-20 17:05:33.504127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.710 [2024-11-20 17:05:33.504248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.710 [2024-11-20 17:05:33.504265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('spare') 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.710 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:10.279 /dev/nbd0 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.279 1+0 records in 00:13:10.279 1+0 records out 00:13:10.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000625198 s, 6.6 MB/s 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:10.279 17:05:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.279 17:05:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:10.559 /dev/nbd1 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.559 1+0 records in 00:13:10.559 1+0 records out 00:13:10.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622137 s, 6.6 MB/s 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.559 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:10.817 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:10.817 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.817 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:10.817 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:10.817 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:10.817 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.817 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:10.817 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:10.817 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:10.817 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:13:10.817 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.817 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.817 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:11.075 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:11.075 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.075 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:11.075 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.075 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:11.075 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:11.075 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:11.075 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.075 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:11.333 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:11.333 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:11.333 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:11.333 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.333 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.333 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:11.333 
17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:11.333 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.333 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:11.333 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:11.333 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.333 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.333 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.333 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:11.333 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.333 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.333 [2024-11-20 17:05:34.996069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:11.333 [2024-11-20 17:05:34.996185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.333 [2024-11-20 17:05:34.996227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:11.333 [2024-11-20 17:05:34.996241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.333 [2024-11-20 17:05:34.999214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.333 [2024-11-20 17:05:34.999257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:11.333 [2024-11-20 17:05:34.999381] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:11.333 [2024-11-20 17:05:34.999441] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.333 [2024-11-20 17:05:34.999658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.333 spare 00:13:11.333 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.333 17:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:11.333 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.333 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.333 [2024-11-20 17:05:35.099774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:11.333 [2024-11-20 17:05:35.099818] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:11.333 [2024-11-20 17:05:35.100174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:11.333 [2024-11-20 17:05:35.100358] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:11.333 [2024-11-20 17:05:35.100375] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:11.333 [2024-11-20 17:05:35.100570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.333 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.333 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:11.333 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.333 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.333 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:11.333 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.333 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:11.333 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.333 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.334 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.334 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.334 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.334 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.334 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.334 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.334 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.334 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.334 "name": "raid_bdev1", 00:13:11.334 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:11.334 "strip_size_kb": 0, 00:13:11.334 "state": "online", 00:13:11.334 "raid_level": "raid1", 00:13:11.334 "superblock": true, 00:13:11.334 "num_base_bdevs": 2, 00:13:11.334 "num_base_bdevs_discovered": 2, 00:13:11.334 "num_base_bdevs_operational": 2, 00:13:11.334 "base_bdevs_list": [ 00:13:11.334 { 00:13:11.334 "name": "spare", 00:13:11.334 "uuid": "c27d3d27-310e-5825-a1db-e33cd1c278f6", 00:13:11.334 "is_configured": true, 00:13:11.334 "data_offset": 2048, 00:13:11.334 "data_size": 63488 00:13:11.334 }, 00:13:11.334 { 00:13:11.334 "name": 
"BaseBdev2", 00:13:11.334 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:11.334 "is_configured": true, 00:13:11.334 "data_offset": 2048, 00:13:11.334 "data_size": 63488 00:13:11.334 } 00:13:11.334 ] 00:13:11.334 }' 00:13:11.334 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.334 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.900 "name": "raid_bdev1", 00:13:11.900 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:11.900 "strip_size_kb": 0, 00:13:11.900 "state": "online", 00:13:11.900 "raid_level": "raid1", 00:13:11.900 "superblock": true, 00:13:11.900 "num_base_bdevs": 2, 00:13:11.900 "num_base_bdevs_discovered": 2, 00:13:11.900 
"num_base_bdevs_operational": 2, 00:13:11.900 "base_bdevs_list": [ 00:13:11.900 { 00:13:11.900 "name": "spare", 00:13:11.900 "uuid": "c27d3d27-310e-5825-a1db-e33cd1c278f6", 00:13:11.900 "is_configured": true, 00:13:11.900 "data_offset": 2048, 00:13:11.900 "data_size": 63488 00:13:11.900 }, 00:13:11.900 { 00:13:11.900 "name": "BaseBdev2", 00:13:11.900 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:11.900 "is_configured": true, 00:13:11.900 "data_offset": 2048, 00:13:11.900 "data_size": 63488 00:13:11.900 } 00:13:11.900 ] 00:13:11.900 }' 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.900 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.157 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.157 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:12.157 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.157 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:13:12.157 [2024-11-20 17:05:35.792930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:12.157 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.157 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:12.157 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.157 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.157 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.157 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.157 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:12.157 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.157 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.157 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.158 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.158 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.158 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.158 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.158 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.158 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.158 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:13:12.158 "name": "raid_bdev1", 00:13:12.158 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:12.158 "strip_size_kb": 0, 00:13:12.158 "state": "online", 00:13:12.158 "raid_level": "raid1", 00:13:12.158 "superblock": true, 00:13:12.158 "num_base_bdevs": 2, 00:13:12.158 "num_base_bdevs_discovered": 1, 00:13:12.158 "num_base_bdevs_operational": 1, 00:13:12.158 "base_bdevs_list": [ 00:13:12.158 { 00:13:12.158 "name": null, 00:13:12.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.158 "is_configured": false, 00:13:12.158 "data_offset": 0, 00:13:12.158 "data_size": 63488 00:13:12.158 }, 00:13:12.158 { 00:13:12.158 "name": "BaseBdev2", 00:13:12.158 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:12.158 "is_configured": true, 00:13:12.158 "data_offset": 2048, 00:13:12.158 "data_size": 63488 00:13:12.158 } 00:13:12.158 ] 00:13:12.158 }' 00:13:12.158 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.158 17:05:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.723 17:05:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:12.723 17:05:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.723 17:05:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.723 [2024-11-20 17:05:36.329161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:12.723 [2024-11-20 17:05:36.329399] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:12.723 [2024-11-20 17:05:36.329440] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:12.723 [2024-11-20 17:05:36.329507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:12.723 [2024-11-20 17:05:36.345649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:12.723 17:05:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.723 17:05:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:12.723 [2024-11-20 17:05:36.348121] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.660 "name": "raid_bdev1", 00:13:13.660 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:13.660 "strip_size_kb": 0, 00:13:13.660 "state": "online", 
00:13:13.660 "raid_level": "raid1", 00:13:13.660 "superblock": true, 00:13:13.660 "num_base_bdevs": 2, 00:13:13.660 "num_base_bdevs_discovered": 2, 00:13:13.660 "num_base_bdevs_operational": 2, 00:13:13.660 "process": { 00:13:13.660 "type": "rebuild", 00:13:13.660 "target": "spare", 00:13:13.660 "progress": { 00:13:13.660 "blocks": 20480, 00:13:13.660 "percent": 32 00:13:13.660 } 00:13:13.660 }, 00:13:13.660 "base_bdevs_list": [ 00:13:13.660 { 00:13:13.660 "name": "spare", 00:13:13.660 "uuid": "c27d3d27-310e-5825-a1db-e33cd1c278f6", 00:13:13.660 "is_configured": true, 00:13:13.660 "data_offset": 2048, 00:13:13.660 "data_size": 63488 00:13:13.660 }, 00:13:13.660 { 00:13:13.660 "name": "BaseBdev2", 00:13:13.660 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:13.660 "is_configured": true, 00:13:13.660 "data_offset": 2048, 00:13:13.660 "data_size": 63488 00:13:13.660 } 00:13:13.660 ] 00:13:13.660 }' 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.660 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.660 [2024-11-20 17:05:37.509608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:13.920 [2024-11-20 17:05:37.556537] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:13.920 [2024-11-20 
17:05:37.556845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.920 [2024-11-20 17:05:37.556875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:13.920 [2024-11-20 17:05:37.556891] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.920 "name": "raid_bdev1", 00:13:13.920 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:13.920 "strip_size_kb": 0, 00:13:13.920 "state": "online", 00:13:13.920 "raid_level": "raid1", 00:13:13.920 "superblock": true, 00:13:13.920 "num_base_bdevs": 2, 00:13:13.920 "num_base_bdevs_discovered": 1, 00:13:13.920 "num_base_bdevs_operational": 1, 00:13:13.920 "base_bdevs_list": [ 00:13:13.920 { 00:13:13.920 "name": null, 00:13:13.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.920 "is_configured": false, 00:13:13.920 "data_offset": 0, 00:13:13.920 "data_size": 63488 00:13:13.920 }, 00:13:13.920 { 00:13:13.920 "name": "BaseBdev2", 00:13:13.920 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:13.920 "is_configured": true, 00:13:13.920 "data_offset": 2048, 00:13:13.920 "data_size": 63488 00:13:13.920 } 00:13:13.920 ] 00:13:13.920 }' 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.920 17:05:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.487 17:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:14.487 17:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.487 17:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.487 [2024-11-20 17:05:38.111849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:14.487 [2024-11-20 17:05:38.112189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.487 [2024-11-20 17:05:38.112228] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:13:14.487 [2024-11-20 17:05:38.112247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.487 [2024-11-20 17:05:38.112854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.487 [2024-11-20 17:05:38.112926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:14.487 [2024-11-20 17:05:38.113045] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:14.487 [2024-11-20 17:05:38.113070] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:14.487 [2024-11-20 17:05:38.113084] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:14.487 [2024-11-20 17:05:38.113115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:14.487 [2024-11-20 17:05:38.129449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:14.487 spare 00:13:14.487 17:05:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.487 17:05:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:14.487 [2024-11-20 17:05:38.131977] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:15.421 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.421 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.421 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.421 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.421 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.421 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.421 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.421 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.421 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.421 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.421 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.421 "name": "raid_bdev1", 00:13:15.421 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:15.421 "strip_size_kb": 0, 00:13:15.421 "state": "online", 00:13:15.421 "raid_level": "raid1", 00:13:15.421 "superblock": true, 00:13:15.421 "num_base_bdevs": 2, 00:13:15.421 "num_base_bdevs_discovered": 2, 00:13:15.421 "num_base_bdevs_operational": 2, 00:13:15.421 "process": { 00:13:15.421 "type": "rebuild", 00:13:15.421 "target": "spare", 00:13:15.421 "progress": { 00:13:15.421 "blocks": 20480, 00:13:15.421 "percent": 32 00:13:15.421 } 00:13:15.421 }, 00:13:15.421 "base_bdevs_list": [ 00:13:15.421 { 00:13:15.421 "name": "spare", 00:13:15.421 "uuid": "c27d3d27-310e-5825-a1db-e33cd1c278f6", 00:13:15.421 "is_configured": true, 00:13:15.421 "data_offset": 2048, 00:13:15.421 "data_size": 63488 00:13:15.421 }, 00:13:15.421 { 00:13:15.421 "name": "BaseBdev2", 00:13:15.421 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:15.421 "is_configured": true, 00:13:15.421 "data_offset": 2048, 00:13:15.421 "data_size": 63488 00:13:15.421 } 00:13:15.421 ] 00:13:15.421 }' 00:13:15.421 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.421 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:13:15.421 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.680 [2024-11-20 17:05:39.293174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:15.680 [2024-11-20 17:05:39.339955] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:15.680 [2024-11-20 17:05:39.340220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.680 [2024-11-20 17:05:39.340255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:15.680 [2024-11-20 17:05:39.340268] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.680 "name": "raid_bdev1", 00:13:15.680 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:15.680 "strip_size_kb": 0, 00:13:15.680 "state": "online", 00:13:15.680 "raid_level": "raid1", 00:13:15.680 "superblock": true, 00:13:15.680 "num_base_bdevs": 2, 00:13:15.680 "num_base_bdevs_discovered": 1, 00:13:15.680 "num_base_bdevs_operational": 1, 00:13:15.680 "base_bdevs_list": [ 00:13:15.680 { 00:13:15.680 "name": null, 00:13:15.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.680 "is_configured": false, 00:13:15.680 "data_offset": 0, 00:13:15.680 "data_size": 63488 00:13:15.680 }, 00:13:15.680 { 00:13:15.680 "name": "BaseBdev2", 00:13:15.680 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:15.680 "is_configured": true, 00:13:15.680 "data_offset": 2048, 00:13:15.680 "data_size": 63488 00:13:15.680 } 00:13:15.680 ] 00:13:15.680 }' 
00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.680 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.248 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.248 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.248 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.248 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.248 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.248 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.248 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.248 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.248 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.248 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.248 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.248 "name": "raid_bdev1", 00:13:16.248 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:16.248 "strip_size_kb": 0, 00:13:16.248 "state": "online", 00:13:16.248 "raid_level": "raid1", 00:13:16.248 "superblock": true, 00:13:16.248 "num_base_bdevs": 2, 00:13:16.248 "num_base_bdevs_discovered": 1, 00:13:16.248 "num_base_bdevs_operational": 1, 00:13:16.248 "base_bdevs_list": [ 00:13:16.248 { 00:13:16.248 "name": null, 00:13:16.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.248 "is_configured": false, 00:13:16.248 "data_offset": 0, 
00:13:16.248 "data_size": 63488 00:13:16.248 }, 00:13:16.248 { 00:13:16.248 "name": "BaseBdev2", 00:13:16.248 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:16.248 "is_configured": true, 00:13:16.248 "data_offset": 2048, 00:13:16.248 "data_size": 63488 00:13:16.248 } 00:13:16.248 ] 00:13:16.248 }' 00:13:16.248 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.248 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.248 17:05:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.248 17:05:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.248 17:05:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:16.248 17:05:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.248 17:05:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.248 17:05:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.248 17:05:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:16.248 17:05:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.248 17:05:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.248 [2024-11-20 17:05:40.059996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:16.248 [2024-11-20 17:05:40.060289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.248 [2024-11-20 17:05:40.060337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:16.248 [2024-11-20 17:05:40.060354] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.248 [2024-11-20 17:05:40.060958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.248 [2024-11-20 17:05:40.060983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:16.248 [2024-11-20 17:05:40.061085] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:16.248 [2024-11-20 17:05:40.061106] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:16.248 [2024-11-20 17:05:40.061134] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:16.248 [2024-11-20 17:05:40.061146] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:16.248 BaseBdev1 00:13:16.248 17:05:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.248 17:05:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:17.625 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:17.625 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.625 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.625 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.626 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.626 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:17.626 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.626 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.626 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.626 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.626 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.626 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.626 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.626 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.626 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.626 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.626 "name": "raid_bdev1", 00:13:17.626 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:17.626 "strip_size_kb": 0, 00:13:17.626 "state": "online", 00:13:17.626 "raid_level": "raid1", 00:13:17.626 "superblock": true, 00:13:17.626 "num_base_bdevs": 2, 00:13:17.626 "num_base_bdevs_discovered": 1, 00:13:17.626 "num_base_bdevs_operational": 1, 00:13:17.626 "base_bdevs_list": [ 00:13:17.626 { 00:13:17.626 "name": null, 00:13:17.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.626 "is_configured": false, 00:13:17.626 "data_offset": 0, 00:13:17.626 "data_size": 63488 00:13:17.626 }, 00:13:17.626 { 00:13:17.626 "name": "BaseBdev2", 00:13:17.626 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:17.626 "is_configured": true, 00:13:17.626 "data_offset": 2048, 00:13:17.626 "data_size": 63488 00:13:17.626 } 00:13:17.626 ] 00:13:17.626 }' 00:13:17.626 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.626 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:17.885 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:17.885 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.885 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:17.885 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:17.885 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.885 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.885 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.885 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.885 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.885 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.885 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.885 "name": "raid_bdev1", 00:13:17.885 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:17.885 "strip_size_kb": 0, 00:13:17.885 "state": "online", 00:13:17.885 "raid_level": "raid1", 00:13:17.885 "superblock": true, 00:13:17.885 "num_base_bdevs": 2, 00:13:17.885 "num_base_bdevs_discovered": 1, 00:13:17.885 "num_base_bdevs_operational": 1, 00:13:17.885 "base_bdevs_list": [ 00:13:17.885 { 00:13:17.885 "name": null, 00:13:17.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.885 "is_configured": false, 00:13:17.885 "data_offset": 0, 00:13:17.885 "data_size": 63488 00:13:17.885 }, 00:13:17.885 { 00:13:17.885 "name": "BaseBdev2", 00:13:17.885 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:17.885 "is_configured": true, 
00:13:17.885 "data_offset": 2048, 00:13:17.885 "data_size": 63488 00:13:17.885 } 00:13:17.885 ] 00:13:17.885 }' 00:13:17.885 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.885 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:17.885 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.143 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.143 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:18.143 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:13:18.143 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:18.143 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:18.143 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.143 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:18.143 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.143 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:18.143 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 [2024-11-20 17:05:41.768961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:18.143 [2024-11-20 17:05:41.769176] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:18.143 [2024-11-20 17:05:41.769202] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:18.143 request: 00:13:18.143 { 00:13:18.143 "base_bdev": "BaseBdev1", 00:13:18.143 "raid_bdev": "raid_bdev1", 00:13:18.143 "method": "bdev_raid_add_base_bdev", 00:13:18.143 "req_id": 1 00:13:18.143 } 00:13:18.143 Got JSON-RPC error response 00:13:18.143 response: 00:13:18.143 { 00:13:18.144 "code": -22, 00:13:18.144 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:18.144 } 00:13:18.144 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:18.144 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:18.144 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:18.144 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:18.144 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:18.144 17:05:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.078 "name": "raid_bdev1", 00:13:19.078 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:19.078 "strip_size_kb": 0, 00:13:19.078 "state": "online", 00:13:19.078 "raid_level": "raid1", 00:13:19.078 "superblock": true, 00:13:19.078 "num_base_bdevs": 2, 00:13:19.078 "num_base_bdevs_discovered": 1, 00:13:19.078 "num_base_bdevs_operational": 1, 00:13:19.078 "base_bdevs_list": [ 00:13:19.078 { 00:13:19.078 "name": null, 00:13:19.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.078 "is_configured": false, 00:13:19.078 "data_offset": 0, 00:13:19.078 "data_size": 63488 00:13:19.078 }, 00:13:19.078 { 00:13:19.078 "name": "BaseBdev2", 00:13:19.078 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:19.078 "is_configured": true, 00:13:19.078 "data_offset": 2048, 00:13:19.078 "data_size": 63488 00:13:19.078 } 00:13:19.078 ] 00:13:19.078 }' 
00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.078 17:05:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.646 "name": "raid_bdev1", 00:13:19.646 "uuid": "0ca4782f-659c-4eb9-af31-a85a3f5e53c1", 00:13:19.646 "strip_size_kb": 0, 00:13:19.646 "state": "online", 00:13:19.646 "raid_level": "raid1", 00:13:19.646 "superblock": true, 00:13:19.646 "num_base_bdevs": 2, 00:13:19.646 "num_base_bdevs_discovered": 1, 00:13:19.646 "num_base_bdevs_operational": 1, 00:13:19.646 "base_bdevs_list": [ 00:13:19.646 { 00:13:19.646 "name": null, 00:13:19.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.646 "is_configured": false, 00:13:19.646 "data_offset": 0, 
00:13:19.646 "data_size": 63488 00:13:19.646 }, 00:13:19.646 { 00:13:19.646 "name": "BaseBdev2", 00:13:19.646 "uuid": "0d6445b1-2038-509b-b4aa-3693e30a77c4", 00:13:19.646 "is_configured": true, 00:13:19.646 "data_offset": 2048, 00:13:19.646 "data_size": 63488 00:13:19.646 } 00:13:19.646 ] 00:13:19.646 }' 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76886 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76886 ']' 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76886 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76886 00:13:19.646 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:19.646 killing process with pid 76886 00:13:19.646 Received shutdown signal, test time was about 18.006853 seconds 00:13:19.646 00:13:19.646 Latency(us) 00:13:19.646 [2024-11-20T17:05:43.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.647 [2024-11-20T17:05:43.516Z] =================================================================================================================== 00:13:19.647 
[2024-11-20T17:05:43.516Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:19.647 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:19.647 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76886' 00:13:19.647 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76886 00:13:19.647 [2024-11-20 17:05:43.495565] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:19.647 17:05:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76886 00:13:19.647 [2024-11-20 17:05:43.495719] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.647 [2024-11-20 17:05:43.495804] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.647 [2024-11-20 17:05:43.495826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:19.905 [2024-11-20 17:05:43.686262] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:21.280 00:13:21.280 real 0m21.285s 00:13:21.280 user 0m28.999s 00:13:21.280 sys 0m1.990s 00:13:21.280 ************************************ 00:13:21.280 END TEST raid_rebuild_test_sb_io 00:13:21.280 ************************************ 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.280 17:05:44 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:21.280 17:05:44 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:21.280 17:05:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 
']' 00:13:21.280 17:05:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.280 17:05:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:21.280 ************************************ 00:13:21.280 START TEST raid_rebuild_test 00:13:21.280 ************************************ 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # 
(( i <= num_base_bdevs )) 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:21.280 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:21.281 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:21.281 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:21.281 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:21.281 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:21.281 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:21.281 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:21.281 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77587 00:13:21.281 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77587 00:13:21.281 17:05:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77587 ']' 00:13:21.281 17:05:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:21.281 17:05:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.281 
17:05:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.281 17:05:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.281 17:05:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:21.281 17:05:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.281 [2024-11-20 17:05:44.934018] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:13:21.281 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:21.281 Zero copy mechanism will not be used. 00:13:21.281 [2024-11-20 17:05:44.934460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77587 ] 00:13:21.281 [2024-11-20 17:05:45.118232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.539 [2024-11-20 17:05:45.246887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.798 [2024-11-20 17:05:45.440276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.798 [2024-11-20 17:05:45.440544] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.056 17:05:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.056 17:05:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:22.056 17:05:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:22.056 17:05:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 
-b BaseBdev1_malloc 00:13:22.056 17:05:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.056 17:05:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.315 BaseBdev1_malloc 00:13:22.315 17:05:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.315 17:05:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:22.315 17:05:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.315 17:05:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.315 [2024-11-20 17:05:45.953170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:22.315 [2024-11-20 17:05:45.953236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.315 [2024-11-20 17:05:45.953265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:22.315 [2024-11-20 17:05:45.953283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.315 [2024-11-20 17:05:45.956124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.315 [2024-11-20 17:05:45.956343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:22.315 BaseBdev1 00:13:22.315 17:05:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.315 17:05:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:22.315 17:05:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:22.316 17:05:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.316 17:05:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:22.316 BaseBdev2_malloc 00:13:22.316 17:05:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.316 17:05:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:22.316 17:05:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.316 17:05:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.316 [2024-11-20 17:05:46.006788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:22.316 [2024-11-20 17:05:46.006885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.316 [2024-11-20 17:05:46.006917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:22.316 [2024-11-20 17:05:46.006935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.316 [2024-11-20 17:05:46.009866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.316 [2024-11-20 17:05:46.009923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:22.316 BaseBdev2 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.316 BaseBdev3_malloc 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.316 [2024-11-20 17:05:46.074637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:22.316 [2024-11-20 17:05:46.074700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.316 [2024-11-20 17:05:46.074729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:22.316 [2024-11-20 17:05:46.074747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.316 [2024-11-20 17:05:46.077778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.316 [2024-11-20 17:05:46.077832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:22.316 BaseBdev3 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.316 BaseBdev4_malloc 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.316 [2024-11-20 17:05:46.128540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:22.316 [2024-11-20 17:05:46.128805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.316 [2024-11-20 17:05:46.128844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:22.316 [2024-11-20 17:05:46.128863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.316 [2024-11-20 17:05:46.131891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.316 [2024-11-20 17:05:46.131966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:22.316 BaseBdev4 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.316 spare_malloc 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.316 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.575 spare_delay 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:22.575 
17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.575 [2024-11-20 17:05:46.189973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:22.575 [2024-11-20 17:05:46.190052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.575 [2024-11-20 17:05:46.190092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:22.575 [2024-11-20 17:05:46.190125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.575 [2024-11-20 17:05:46.192969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.575 [2024-11-20 17:05:46.193019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:22.575 spare 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.575 [2024-11-20 17:05:46.198023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.575 [2024-11-20 17:05:46.200412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:22.575 [2024-11-20 17:05:46.200632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:22.575 [2024-11-20 17:05:46.200728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:22.575 [2024-11-20 17:05:46.200865] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:13:22.575 [2024-11-20 17:05:46.200891] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:22.575 [2024-11-20 17:05:46.201214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:22.575 [2024-11-20 17:05:46.201435] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:22.575 [2024-11-20 17:05:46.201455] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:22.575 [2024-11-20 17:05:46.201644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.575 17:05:46 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.575 "name": "raid_bdev1", 00:13:22.575 "uuid": "a6c60410-a7c7-454f-91e2-e542b02e7b28", 00:13:22.575 "strip_size_kb": 0, 00:13:22.575 "state": "online", 00:13:22.575 "raid_level": "raid1", 00:13:22.575 "superblock": false, 00:13:22.575 "num_base_bdevs": 4, 00:13:22.575 "num_base_bdevs_discovered": 4, 00:13:22.575 "num_base_bdevs_operational": 4, 00:13:22.575 "base_bdevs_list": [ 00:13:22.575 { 00:13:22.575 "name": "BaseBdev1", 00:13:22.575 "uuid": "e052bf75-db76-50ec-b911-97dd714ffe0d", 00:13:22.575 "is_configured": true, 00:13:22.575 "data_offset": 0, 00:13:22.575 "data_size": 65536 00:13:22.575 }, 00:13:22.575 { 00:13:22.575 "name": "BaseBdev2", 00:13:22.575 "uuid": "62dd159f-bf02-504f-8f0f-268fc1b5b323", 00:13:22.575 "is_configured": true, 00:13:22.575 "data_offset": 0, 00:13:22.575 "data_size": 65536 00:13:22.575 }, 00:13:22.575 { 00:13:22.575 "name": "BaseBdev3", 00:13:22.575 "uuid": "b9fc342c-670b-566d-9cf6-c84ee7a2b41e", 00:13:22.575 "is_configured": true, 00:13:22.575 "data_offset": 0, 00:13:22.575 "data_size": 65536 00:13:22.575 }, 00:13:22.575 { 00:13:22.575 "name": "BaseBdev4", 00:13:22.575 "uuid": "2f6f4caa-1846-5fb6-8e4c-af5cd4011d3a", 00:13:22.575 "is_configured": true, 00:13:22.575 "data_offset": 0, 00:13:22.575 "data_size": 65536 00:13:22.575 } 00:13:22.575 ] 00:13:22.575 }' 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.575 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:22.834 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:22.834 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.834 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.834 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:23.092 [2024-11-20 17:05:46.702628] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:23.092 17:05:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:23.351 [2024-11-20 17:05:47.030346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:23.351 /dev/nbd0 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:23.351 17:05:47 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.351 1+0 records in 00:13:23.351 1+0 records out 00:13:23.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002748 s, 14.9 MB/s 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:23.351 17:05:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:31.490 65536+0 records in 00:13:31.490 65536+0 records out 00:13:31.490 33554432 bytes (34 MB, 32 MiB) copied, 7.91403 s, 4.2 MB/s 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:31.490 
17:05:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:31.490 [2024-11-20 17:05:55.275285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.490 [2024-11-20 17:05:55.311352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.490 17:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.749 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.749 "name": "raid_bdev1", 00:13:31.749 "uuid": "a6c60410-a7c7-454f-91e2-e542b02e7b28", 00:13:31.749 "strip_size_kb": 0, 00:13:31.749 "state": "online", 00:13:31.749 "raid_level": "raid1", 00:13:31.749 "superblock": false, 00:13:31.749 "num_base_bdevs": 4, 00:13:31.749 "num_base_bdevs_discovered": 3, 00:13:31.749 "num_base_bdevs_operational": 3, 00:13:31.749 "base_bdevs_list": [ 00:13:31.749 { 00:13:31.749 "name": null, 00:13:31.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.749 
"is_configured": false, 00:13:31.749 "data_offset": 0, 00:13:31.749 "data_size": 65536 00:13:31.749 }, 00:13:31.749 { 00:13:31.749 "name": "BaseBdev2", 00:13:31.749 "uuid": "62dd159f-bf02-504f-8f0f-268fc1b5b323", 00:13:31.749 "is_configured": true, 00:13:31.749 "data_offset": 0, 00:13:31.749 "data_size": 65536 00:13:31.749 }, 00:13:31.749 { 00:13:31.749 "name": "BaseBdev3", 00:13:31.749 "uuid": "b9fc342c-670b-566d-9cf6-c84ee7a2b41e", 00:13:31.749 "is_configured": true, 00:13:31.749 "data_offset": 0, 00:13:31.749 "data_size": 65536 00:13:31.749 }, 00:13:31.749 { 00:13:31.749 "name": "BaseBdev4", 00:13:31.749 "uuid": "2f6f4caa-1846-5fb6-8e4c-af5cd4011d3a", 00:13:31.749 "is_configured": true, 00:13:31.749 "data_offset": 0, 00:13:31.749 "data_size": 65536 00:13:31.749 } 00:13:31.749 ] 00:13:31.749 }' 00:13:31.749 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.749 17:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.008 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:32.008 17:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.008 17:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.008 [2024-11-20 17:05:55.831612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.008 [2024-11-20 17:05:55.845931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:32.008 17:05:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.008 17:05:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:32.008 [2024-11-20 17:05:55.848527] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:33.387 17:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.387 17:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.387 17:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.387 17:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.387 17:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.387 17:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.387 17:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.387 17:05:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.387 17:05:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.387 17:05:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.387 17:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.387 "name": "raid_bdev1", 00:13:33.387 "uuid": "a6c60410-a7c7-454f-91e2-e542b02e7b28", 00:13:33.387 "strip_size_kb": 0, 00:13:33.387 "state": "online", 00:13:33.387 "raid_level": "raid1", 00:13:33.387 "superblock": false, 00:13:33.387 "num_base_bdevs": 4, 00:13:33.387 "num_base_bdevs_discovered": 4, 00:13:33.387 "num_base_bdevs_operational": 4, 00:13:33.387 "process": { 00:13:33.387 "type": "rebuild", 00:13:33.387 "target": "spare", 00:13:33.387 "progress": { 00:13:33.387 "blocks": 20480, 00:13:33.387 "percent": 31 00:13:33.387 } 00:13:33.387 }, 00:13:33.387 "base_bdevs_list": [ 00:13:33.387 { 00:13:33.387 "name": "spare", 00:13:33.387 "uuid": "417ead02-07fc-59cf-ad10-37ff27765ce3", 00:13:33.387 "is_configured": true, 00:13:33.387 "data_offset": 0, 00:13:33.387 "data_size": 65536 00:13:33.387 }, 00:13:33.387 { 00:13:33.387 "name": "BaseBdev2", 00:13:33.387 "uuid": 
"62dd159f-bf02-504f-8f0f-268fc1b5b323", 00:13:33.387 "is_configured": true, 00:13:33.387 "data_offset": 0, 00:13:33.387 "data_size": 65536 00:13:33.387 }, 00:13:33.387 { 00:13:33.387 "name": "BaseBdev3", 00:13:33.387 "uuid": "b9fc342c-670b-566d-9cf6-c84ee7a2b41e", 00:13:33.387 "is_configured": true, 00:13:33.387 "data_offset": 0, 00:13:33.387 "data_size": 65536 00:13:33.387 }, 00:13:33.387 { 00:13:33.387 "name": "BaseBdev4", 00:13:33.387 "uuid": "2f6f4caa-1846-5fb6-8e4c-af5cd4011d3a", 00:13:33.387 "is_configured": true, 00:13:33.387 "data_offset": 0, 00:13:33.387 "data_size": 65536 00:13:33.387 } 00:13:33.387 ] 00:13:33.387 }' 00:13:33.387 17:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.387 17:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.387 17:05:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.387 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.387 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:33.387 17:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.387 17:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.387 [2024-11-20 17:05:57.021868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.387 [2024-11-20 17:05:57.056660] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:33.387 [2024-11-20 17:05:57.056926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.387 [2024-11-20 17:05:57.057183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.387 [2024-11-20 17:05:57.057243] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:13:33.387 17:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.387 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:33.387 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.387 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.387 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.387 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.387 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.387 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.387 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.388 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.388 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.388 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.388 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.388 17:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.388 17:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.388 17:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.388 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.388 "name": "raid_bdev1", 00:13:33.388 "uuid": "a6c60410-a7c7-454f-91e2-e542b02e7b28", 00:13:33.388 "strip_size_kb": 0, 00:13:33.388 "state": "online", 
00:13:33.388 "raid_level": "raid1", 00:13:33.388 "superblock": false, 00:13:33.388 "num_base_bdevs": 4, 00:13:33.388 "num_base_bdevs_discovered": 3, 00:13:33.388 "num_base_bdevs_operational": 3, 00:13:33.388 "base_bdevs_list": [ 00:13:33.388 { 00:13:33.388 "name": null, 00:13:33.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.388 "is_configured": false, 00:13:33.388 "data_offset": 0, 00:13:33.388 "data_size": 65536 00:13:33.388 }, 00:13:33.388 { 00:13:33.388 "name": "BaseBdev2", 00:13:33.388 "uuid": "62dd159f-bf02-504f-8f0f-268fc1b5b323", 00:13:33.388 "is_configured": true, 00:13:33.388 "data_offset": 0, 00:13:33.388 "data_size": 65536 00:13:33.388 }, 00:13:33.388 { 00:13:33.388 "name": "BaseBdev3", 00:13:33.388 "uuid": "b9fc342c-670b-566d-9cf6-c84ee7a2b41e", 00:13:33.388 "is_configured": true, 00:13:33.388 "data_offset": 0, 00:13:33.388 "data_size": 65536 00:13:33.388 }, 00:13:33.388 { 00:13:33.388 "name": "BaseBdev4", 00:13:33.388 "uuid": "2f6f4caa-1846-5fb6-8e4c-af5cd4011d3a", 00:13:33.388 "is_configured": true, 00:13:33.388 "data_offset": 0, 00:13:33.388 "data_size": 65536 00:13:33.388 } 00:13:33.388 ] 00:13:33.388 }' 00:13:33.388 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.388 17:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.956 "name": "raid_bdev1", 00:13:33.956 "uuid": "a6c60410-a7c7-454f-91e2-e542b02e7b28", 00:13:33.956 "strip_size_kb": 0, 00:13:33.956 "state": "online", 00:13:33.956 "raid_level": "raid1", 00:13:33.956 "superblock": false, 00:13:33.956 "num_base_bdevs": 4, 00:13:33.956 "num_base_bdevs_discovered": 3, 00:13:33.956 "num_base_bdevs_operational": 3, 00:13:33.956 "base_bdevs_list": [ 00:13:33.956 { 00:13:33.956 "name": null, 00:13:33.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.956 "is_configured": false, 00:13:33.956 "data_offset": 0, 00:13:33.956 "data_size": 65536 00:13:33.956 }, 00:13:33.956 { 00:13:33.956 "name": "BaseBdev2", 00:13:33.956 "uuid": "62dd159f-bf02-504f-8f0f-268fc1b5b323", 00:13:33.956 "is_configured": true, 00:13:33.956 "data_offset": 0, 00:13:33.956 "data_size": 65536 00:13:33.956 }, 00:13:33.956 { 00:13:33.956 "name": "BaseBdev3", 00:13:33.956 "uuid": "b9fc342c-670b-566d-9cf6-c84ee7a2b41e", 00:13:33.956 "is_configured": true, 00:13:33.956 "data_offset": 0, 00:13:33.956 "data_size": 65536 00:13:33.956 }, 00:13:33.956 { 00:13:33.956 "name": "BaseBdev4", 00:13:33.956 "uuid": "2f6f4caa-1846-5fb6-8e4c-af5cd4011d3a", 00:13:33.956 "is_configured": true, 00:13:33.956 "data_offset": 0, 00:13:33.956 "data_size": 65536 00:13:33.956 } 00:13:33.956 ] 00:13:33.956 }' 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.956 [2024-11-20 17:05:57.776478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:33.956 [2024-11-20 17:05:57.790472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.956 17:05:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:33.956 [2024-11-20 17:05:57.793164] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:35.349 17:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.350 17:05:58 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.350 "name": "raid_bdev1", 00:13:35.350 "uuid": "a6c60410-a7c7-454f-91e2-e542b02e7b28", 00:13:35.350 "strip_size_kb": 0, 00:13:35.350 "state": "online", 00:13:35.350 "raid_level": "raid1", 00:13:35.350 "superblock": false, 00:13:35.350 "num_base_bdevs": 4, 00:13:35.350 "num_base_bdevs_discovered": 4, 00:13:35.350 "num_base_bdevs_operational": 4, 00:13:35.350 "process": { 00:13:35.350 "type": "rebuild", 00:13:35.350 "target": "spare", 00:13:35.350 "progress": { 00:13:35.350 "blocks": 20480, 00:13:35.350 "percent": 31 00:13:35.350 } 00:13:35.350 }, 00:13:35.350 "base_bdevs_list": [ 00:13:35.350 { 00:13:35.350 "name": "spare", 00:13:35.350 "uuid": "417ead02-07fc-59cf-ad10-37ff27765ce3", 00:13:35.350 "is_configured": true, 00:13:35.350 "data_offset": 0, 00:13:35.350 "data_size": 65536 00:13:35.350 }, 00:13:35.350 { 00:13:35.350 "name": "BaseBdev2", 00:13:35.350 "uuid": "62dd159f-bf02-504f-8f0f-268fc1b5b323", 00:13:35.350 "is_configured": true, 00:13:35.350 "data_offset": 0, 00:13:35.350 "data_size": 65536 00:13:35.350 }, 00:13:35.350 { 00:13:35.350 "name": "BaseBdev3", 00:13:35.350 "uuid": "b9fc342c-670b-566d-9cf6-c84ee7a2b41e", 00:13:35.350 "is_configured": true, 00:13:35.350 "data_offset": 0, 00:13:35.350 "data_size": 65536 00:13:35.350 }, 00:13:35.350 { 00:13:35.350 "name": "BaseBdev4", 00:13:35.350 "uuid": "2f6f4caa-1846-5fb6-8e4c-af5cd4011d3a", 00:13:35.350 "is_configured": true, 00:13:35.350 "data_offset": 0, 00:13:35.350 "data_size": 65536 00:13:35.350 } 00:13:35.350 ] 00:13:35.350 }' 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.350 17:05:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.350 [2024-11-20 17:05:58.966301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:35.350 [2024-11-20 17:05:59.001387] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.350 
17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.350 "name": "raid_bdev1", 00:13:35.350 "uuid": "a6c60410-a7c7-454f-91e2-e542b02e7b28", 00:13:35.350 "strip_size_kb": 0, 00:13:35.350 "state": "online", 00:13:35.350 "raid_level": "raid1", 00:13:35.350 "superblock": false, 00:13:35.350 "num_base_bdevs": 4, 00:13:35.350 "num_base_bdevs_discovered": 3, 00:13:35.350 "num_base_bdevs_operational": 3, 00:13:35.350 "process": { 00:13:35.350 "type": "rebuild", 00:13:35.350 "target": "spare", 00:13:35.350 "progress": { 00:13:35.350 "blocks": 24576, 00:13:35.350 "percent": 37 00:13:35.350 } 00:13:35.350 }, 00:13:35.350 "base_bdevs_list": [ 00:13:35.350 { 00:13:35.350 "name": "spare", 00:13:35.350 "uuid": "417ead02-07fc-59cf-ad10-37ff27765ce3", 00:13:35.350 "is_configured": true, 00:13:35.350 "data_offset": 0, 00:13:35.350 "data_size": 65536 00:13:35.350 }, 00:13:35.350 { 00:13:35.350 "name": null, 00:13:35.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.350 "is_configured": false, 00:13:35.350 "data_offset": 0, 00:13:35.350 "data_size": 65536 00:13:35.350 }, 00:13:35.350 { 00:13:35.350 "name": "BaseBdev3", 00:13:35.350 "uuid": "b9fc342c-670b-566d-9cf6-c84ee7a2b41e", 00:13:35.350 "is_configured": true, 
00:13:35.350 "data_offset": 0, 00:13:35.350 "data_size": 65536 00:13:35.350 }, 00:13:35.350 { 00:13:35.350 "name": "BaseBdev4", 00:13:35.350 "uuid": "2f6f4caa-1846-5fb6-8e4c-af5cd4011d3a", 00:13:35.350 "is_configured": true, 00:13:35.350 "data_offset": 0, 00:13:35.350 "data_size": 65536 00:13:35.350 } 00:13:35.350 ] 00:13:35.350 }' 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=475 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.350 17:05:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.350 17:05:59 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.624 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.624 "name": "raid_bdev1", 00:13:35.624 "uuid": "a6c60410-a7c7-454f-91e2-e542b02e7b28", 00:13:35.624 "strip_size_kb": 0, 00:13:35.625 "state": "online", 00:13:35.625 "raid_level": "raid1", 00:13:35.625 "superblock": false, 00:13:35.625 "num_base_bdevs": 4, 00:13:35.625 "num_base_bdevs_discovered": 3, 00:13:35.625 "num_base_bdevs_operational": 3, 00:13:35.625 "process": { 00:13:35.625 "type": "rebuild", 00:13:35.625 "target": "spare", 00:13:35.625 "progress": { 00:13:35.625 "blocks": 26624, 00:13:35.625 "percent": 40 00:13:35.625 } 00:13:35.625 }, 00:13:35.625 "base_bdevs_list": [ 00:13:35.625 { 00:13:35.625 "name": "spare", 00:13:35.625 "uuid": "417ead02-07fc-59cf-ad10-37ff27765ce3", 00:13:35.625 "is_configured": true, 00:13:35.625 "data_offset": 0, 00:13:35.625 "data_size": 65536 00:13:35.625 }, 00:13:35.625 { 00:13:35.625 "name": null, 00:13:35.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.625 "is_configured": false, 00:13:35.625 "data_offset": 0, 00:13:35.625 "data_size": 65536 00:13:35.625 }, 00:13:35.625 { 00:13:35.625 "name": "BaseBdev3", 00:13:35.625 "uuid": "b9fc342c-670b-566d-9cf6-c84ee7a2b41e", 00:13:35.625 "is_configured": true, 00:13:35.625 "data_offset": 0, 00:13:35.625 "data_size": 65536 00:13:35.625 }, 00:13:35.625 { 00:13:35.625 "name": "BaseBdev4", 00:13:35.625 "uuid": "2f6f4caa-1846-5fb6-8e4c-af5cd4011d3a", 00:13:35.625 "is_configured": true, 00:13:35.625 "data_offset": 0, 00:13:35.625 "data_size": 65536 00:13:35.625 } 00:13:35.625 ] 00:13:35.625 }' 00:13:35.625 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.625 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.625 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:13:35.625 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.625 17:05:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:36.563 17:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:36.563 17:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.563 17:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.563 17:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.563 17:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.563 17:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.563 17:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.563 17:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.563 17:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.563 17:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.563 17:06:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.563 17:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.563 "name": "raid_bdev1", 00:13:36.563 "uuid": "a6c60410-a7c7-454f-91e2-e542b02e7b28", 00:13:36.563 "strip_size_kb": 0, 00:13:36.563 "state": "online", 00:13:36.563 "raid_level": "raid1", 00:13:36.563 "superblock": false, 00:13:36.563 "num_base_bdevs": 4, 00:13:36.563 "num_base_bdevs_discovered": 3, 00:13:36.563 "num_base_bdevs_operational": 3, 00:13:36.563 "process": { 00:13:36.563 "type": "rebuild", 00:13:36.563 "target": "spare", 00:13:36.563 "progress": { 00:13:36.563 
"blocks": 51200, 00:13:36.563 "percent": 78 00:13:36.563 } 00:13:36.563 }, 00:13:36.563 "base_bdevs_list": [ 00:13:36.563 { 00:13:36.563 "name": "spare", 00:13:36.563 "uuid": "417ead02-07fc-59cf-ad10-37ff27765ce3", 00:13:36.563 "is_configured": true, 00:13:36.563 "data_offset": 0, 00:13:36.563 "data_size": 65536 00:13:36.563 }, 00:13:36.563 { 00:13:36.563 "name": null, 00:13:36.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.563 "is_configured": false, 00:13:36.563 "data_offset": 0, 00:13:36.563 "data_size": 65536 00:13:36.563 }, 00:13:36.563 { 00:13:36.563 "name": "BaseBdev3", 00:13:36.563 "uuid": "b9fc342c-670b-566d-9cf6-c84ee7a2b41e", 00:13:36.563 "is_configured": true, 00:13:36.563 "data_offset": 0, 00:13:36.563 "data_size": 65536 00:13:36.563 }, 00:13:36.563 { 00:13:36.563 "name": "BaseBdev4", 00:13:36.563 "uuid": "2f6f4caa-1846-5fb6-8e4c-af5cd4011d3a", 00:13:36.563 "is_configured": true, 00:13:36.563 "data_offset": 0, 00:13:36.563 "data_size": 65536 00:13:36.563 } 00:13:36.563 ] 00:13:36.563 }' 00:13:36.563 17:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.822 17:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.822 17:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.822 17:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.822 17:06:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:37.390 [2024-11-20 17:06:01.015100] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:37.390 [2024-11-20 17:06:01.015172] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:37.390 [2024-11-20 17:06:01.015252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.648 17:06:01 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.648 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.648 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.648 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.648 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.648 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.648 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.648 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.648 17:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.648 17:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.907 "name": "raid_bdev1", 00:13:37.907 "uuid": "a6c60410-a7c7-454f-91e2-e542b02e7b28", 00:13:37.907 "strip_size_kb": 0, 00:13:37.907 "state": "online", 00:13:37.907 "raid_level": "raid1", 00:13:37.907 "superblock": false, 00:13:37.907 "num_base_bdevs": 4, 00:13:37.907 "num_base_bdevs_discovered": 3, 00:13:37.907 "num_base_bdevs_operational": 3, 00:13:37.907 "base_bdevs_list": [ 00:13:37.907 { 00:13:37.907 "name": "spare", 00:13:37.907 "uuid": "417ead02-07fc-59cf-ad10-37ff27765ce3", 00:13:37.907 "is_configured": true, 00:13:37.907 "data_offset": 0, 00:13:37.907 "data_size": 65536 00:13:37.907 }, 00:13:37.907 { 00:13:37.907 "name": null, 00:13:37.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.907 "is_configured": false, 00:13:37.907 
"data_offset": 0, 00:13:37.907 "data_size": 65536 00:13:37.907 }, 00:13:37.907 { 00:13:37.907 "name": "BaseBdev3", 00:13:37.907 "uuid": "b9fc342c-670b-566d-9cf6-c84ee7a2b41e", 00:13:37.907 "is_configured": true, 00:13:37.907 "data_offset": 0, 00:13:37.907 "data_size": 65536 00:13:37.907 }, 00:13:37.907 { 00:13:37.907 "name": "BaseBdev4", 00:13:37.907 "uuid": "2f6f4caa-1846-5fb6-8e4c-af5cd4011d3a", 00:13:37.907 "is_configured": true, 00:13:37.907 "data_offset": 0, 00:13:37.907 "data_size": 65536 00:13:37.907 } 00:13:37.907 ] 00:13:37.907 }' 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.907 17:06:01 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.907 "name": "raid_bdev1", 00:13:37.907 "uuid": "a6c60410-a7c7-454f-91e2-e542b02e7b28", 00:13:37.907 "strip_size_kb": 0, 00:13:37.907 "state": "online", 00:13:37.907 "raid_level": "raid1", 00:13:37.907 "superblock": false, 00:13:37.907 "num_base_bdevs": 4, 00:13:37.907 "num_base_bdevs_discovered": 3, 00:13:37.907 "num_base_bdevs_operational": 3, 00:13:37.907 "base_bdevs_list": [ 00:13:37.907 { 00:13:37.907 "name": "spare", 00:13:37.907 "uuid": "417ead02-07fc-59cf-ad10-37ff27765ce3", 00:13:37.907 "is_configured": true, 00:13:37.907 "data_offset": 0, 00:13:37.907 "data_size": 65536 00:13:37.907 }, 00:13:37.907 { 00:13:37.907 "name": null, 00:13:37.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.907 "is_configured": false, 00:13:37.907 "data_offset": 0, 00:13:37.907 "data_size": 65536 00:13:37.907 }, 00:13:37.907 { 00:13:37.907 "name": "BaseBdev3", 00:13:37.907 "uuid": "b9fc342c-670b-566d-9cf6-c84ee7a2b41e", 00:13:37.907 "is_configured": true, 00:13:37.907 "data_offset": 0, 00:13:37.907 "data_size": 65536 00:13:37.907 }, 00:13:37.907 { 00:13:37.907 "name": "BaseBdev4", 00:13:37.907 "uuid": "2f6f4caa-1846-5fb6-8e4c-af5cd4011d3a", 00:13:37.907 "is_configured": true, 00:13:37.907 "data_offset": 0, 00:13:37.907 "data_size": 65536 00:13:37.907 } 00:13:37.907 ] 00:13:37.907 }' 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.907 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.165 "name": "raid_bdev1", 00:13:38.165 "uuid": "a6c60410-a7c7-454f-91e2-e542b02e7b28", 00:13:38.165 "strip_size_kb": 0, 00:13:38.165 "state": "online", 00:13:38.165 "raid_level": "raid1", 00:13:38.165 "superblock": false, 00:13:38.165 "num_base_bdevs": 4, 00:13:38.165 
"num_base_bdevs_discovered": 3, 00:13:38.165 "num_base_bdevs_operational": 3, 00:13:38.165 "base_bdevs_list": [ 00:13:38.165 { 00:13:38.165 "name": "spare", 00:13:38.165 "uuid": "417ead02-07fc-59cf-ad10-37ff27765ce3", 00:13:38.165 "is_configured": true, 00:13:38.165 "data_offset": 0, 00:13:38.165 "data_size": 65536 00:13:38.165 }, 00:13:38.165 { 00:13:38.165 "name": null, 00:13:38.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.165 "is_configured": false, 00:13:38.165 "data_offset": 0, 00:13:38.165 "data_size": 65536 00:13:38.165 }, 00:13:38.165 { 00:13:38.165 "name": "BaseBdev3", 00:13:38.165 "uuid": "b9fc342c-670b-566d-9cf6-c84ee7a2b41e", 00:13:38.165 "is_configured": true, 00:13:38.165 "data_offset": 0, 00:13:38.165 "data_size": 65536 00:13:38.165 }, 00:13:38.165 { 00:13:38.165 "name": "BaseBdev4", 00:13:38.165 "uuid": "2f6f4caa-1846-5fb6-8e4c-af5cd4011d3a", 00:13:38.165 "is_configured": true, 00:13:38.165 "data_offset": 0, 00:13:38.165 "data_size": 65536 00:13:38.165 } 00:13:38.165 ] 00:13:38.165 }' 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.165 17:06:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.734 [2024-11-20 17:06:02.361134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:38.734 [2024-11-20 17:06:02.361355] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:38.734 [2024-11-20 17:06:02.361563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.734 [2024-11-20 17:06:02.361804] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:13:38.734 [2024-11-20 17:06:02.361952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:38.734 17:06:02 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:38.734 17:06:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:38.994 /dev/nbd0 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.994 1+0 records in 00:13:38.994 1+0 records out 00:13:38.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350453 s, 11.7 MB/s 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:38.994 17:06:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:39.253 /dev/nbd1 00:13:39.253 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:39.253 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:39.253 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:39.253 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:39.253 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:39.253 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:39.253 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:39.253 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:39.253 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:39.253 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:39.253 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.253 1+0 records in 00:13:39.253 1+0 records out 00:13:39.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345607 s, 11.9 MB/s 00:13:39.254 17:06:03 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.254 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:39.254 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.254 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:39.254 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:39.254 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.254 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:39.254 17:06:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:39.512 17:06:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:39.512 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:39.512 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:39.512 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:39.512 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:39.512 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.512 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:39.771 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:39.771 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:39.771 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:39.771 17:06:03 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.771 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.771 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:39.771 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:39.771 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.771 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.771 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77587 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77587 ']' 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77587 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # 
uname 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77587 00:13:40.031 killing process with pid 77587 00:13:40.031 Received shutdown signal, test time was about 60.000000 seconds 00:13:40.031 00:13:40.031 Latency(us) 00:13:40.031 [2024-11-20T17:06:03.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.031 [2024-11-20T17:06:03.900Z] =================================================================================================================== 00:13:40.031 [2024-11-20T17:06:03.900Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77587' 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77587 00:13:40.031 [2024-11-20 17:06:03.838952] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:40.031 17:06:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77587 00:13:40.599 [2024-11-20 17:06:04.248078] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:41.535 00:13:41.535 real 0m20.415s 00:13:41.535 user 0m22.912s 00:13:41.535 sys 0m3.411s 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.535 ************************************ 00:13:41.535 END TEST raid_rebuild_test 00:13:41.535 ************************************ 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:41.535 17:06:05 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:41.535 17:06:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:41.535 17:06:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.535 17:06:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:41.535 ************************************ 00:13:41.535 START TEST raid_rebuild_test_sb 00:13:41.535 ************************************ 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78061 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 78061 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78061 ']' 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.535 17:06:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.535 [2024-11-20 17:06:05.392244] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:13:41.535 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:41.535 Zero copy mechanism will not be used. 
00:13:41.535 [2024-11-20 17:06:05.392653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78061 ] 00:13:41.794 [2024-11-20 17:06:05.564928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.053 [2024-11-20 17:06:05.685439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.053 [2024-11-20 17:06:05.891285] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.053 [2024-11-20 17:06:05.891349] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.620 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.620 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:42.620 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:42.620 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:42.620 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.620 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.620 BaseBdev1_malloc 00:13:42.620 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.620 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:42.620 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.620 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.620 [2024-11-20 17:06:06.444187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:42.620 [2024-11-20 17:06:06.444468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.620 [2024-11-20 17:06:06.444515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:42.620 [2024-11-20 17:06:06.444533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.620 [2024-11-20 17:06:06.447625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.620 [2024-11-20 17:06:06.447823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:42.620 BaseBdev1 00:13:42.620 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.620 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:42.620 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:42.620 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.620 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.888 BaseBdev2_malloc 00:13:42.889 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.889 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:42.889 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.889 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.889 [2024-11-20 17:06:06.495206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:42.889 [2024-11-20 17:06:06.495288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.889 [2024-11-20 17:06:06.495318] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:42.889 [2024-11-20 17:06:06.495333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.889 [2024-11-20 17:06:06.498189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.889 [2024-11-20 17:06:06.498232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:42.889 BaseBdev2 00:13:42.889 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.889 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:42.889 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:42.889 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.889 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.889 BaseBdev3_malloc 00:13:42.889 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.889 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.890 [2024-11-20 17:06:06.555626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:42.890 [2024-11-20 17:06:06.555706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.890 [2024-11-20 17:06:06.555737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:42.890 [2024-11-20 17:06:06.555755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:42.890 [2024-11-20 17:06:06.558517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.890 [2024-11-20 17:06:06.558577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:42.890 BaseBdev3 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.890 BaseBdev4_malloc 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.890 [2024-11-20 17:06:06.608525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:42.890 [2024-11-20 17:06:06.608604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.890 [2024-11-20 17:06:06.608631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:42.890 [2024-11-20 17:06:06.608647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.890 [2024-11-20 17:06:06.611375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.890 [2024-11-20 17:06:06.611438] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:42.890 BaseBdev4 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.890 spare_malloc 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.890 spare_delay 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.890 [2024-11-20 17:06:06.665648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:42.890 [2024-11-20 17:06:06.665724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.890 [2024-11-20 17:06:06.665749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:42.890 [2024-11-20 17:06:06.665766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:42.890 [2024-11-20 17:06:06.668531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.890 [2024-11-20 17:06:06.668753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:42.890 spare 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.890 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.890 [2024-11-20 17:06:06.677691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:42.890 [2024-11-20 17:06:06.680181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:42.890 [2024-11-20 17:06:06.680262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.890 [2024-11-20 17:06:06.680336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:42.890 [2024-11-20 17:06:06.680565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:42.890 [2024-11-20 17:06:06.680588] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:42.890 [2024-11-20 17:06:06.680874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:42.890 [2024-11-20 17:06:06.681072] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:42.890 [2024-11-20 17:06:06.681087] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:42.891 [2024-11-20 17:06:06.681269] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.891 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.891 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:42.891 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.891 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.891 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.891 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.891 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.891 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.891 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.891 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.891 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.891 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.891 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.891 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.891 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.891 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.891 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.891 "name": "raid_bdev1", 00:13:42.891 "uuid": 
"6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:13:42.891 "strip_size_kb": 0, 00:13:42.891 "state": "online", 00:13:42.891 "raid_level": "raid1", 00:13:42.891 "superblock": true, 00:13:42.891 "num_base_bdevs": 4, 00:13:42.891 "num_base_bdevs_discovered": 4, 00:13:42.892 "num_base_bdevs_operational": 4, 00:13:42.892 "base_bdevs_list": [ 00:13:42.892 { 00:13:42.892 "name": "BaseBdev1", 00:13:42.892 "uuid": "67e34206-7a6a-52a7-896d-e7712fd21012", 00:13:42.892 "is_configured": true, 00:13:42.892 "data_offset": 2048, 00:13:42.892 "data_size": 63488 00:13:42.892 }, 00:13:42.892 { 00:13:42.892 "name": "BaseBdev2", 00:13:42.892 "uuid": "657d4539-2b93-5fa9-ae87-9995edd669c7", 00:13:42.892 "is_configured": true, 00:13:42.892 "data_offset": 2048, 00:13:42.892 "data_size": 63488 00:13:42.892 }, 00:13:42.892 { 00:13:42.892 "name": "BaseBdev3", 00:13:42.892 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:13:42.892 "is_configured": true, 00:13:42.892 "data_offset": 2048, 00:13:42.892 "data_size": 63488 00:13:42.892 }, 00:13:42.892 { 00:13:42.892 "name": "BaseBdev4", 00:13:42.892 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:13:42.892 "is_configured": true, 00:13:42.892 "data_offset": 2048, 00:13:42.892 "data_size": 63488 00:13:42.892 } 00:13:42.892 ] 00:13:42.892 }' 00:13:42.892 17:06:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.892 17:06:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.463 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:43.463 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.463 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.463 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:43.463 [2024-11-20 17:06:07.210306] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:13:43.463 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.463 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:43.463 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.463 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.463 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.463 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:43.463 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.464 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:43.464 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:43.464 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:43.464 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:43.464 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:43.464 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.464 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:43.464 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.464 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:43.464 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.464 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:43.464 17:06:07 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.464 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.464 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:44.032 [2024-11-20 17:06:07.589988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:44.032 /dev/nbd0 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.032 1+0 records in 00:13:44.032 1+0 records out 00:13:44.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420428 s, 9.7 MB/s 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:44.032 17:06:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:52.147 63488+0 records in 00:13:52.147 63488+0 records out 00:13:52.147 32505856 bytes (33 MB, 31 MiB) copied, 7.77464 s, 4.2 MB/s 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:13:52.147 [2024-11-20 17:06:15.711136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.147 [2024-11-20 17:06:15.743489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.147 "name": "raid_bdev1", 00:13:52.147 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:13:52.147 "strip_size_kb": 0, 00:13:52.147 "state": "online", 00:13:52.147 "raid_level": "raid1", 00:13:52.147 "superblock": true, 00:13:52.147 "num_base_bdevs": 4, 00:13:52.147 "num_base_bdevs_discovered": 3, 00:13:52.147 "num_base_bdevs_operational": 3, 00:13:52.147 "base_bdevs_list": [ 00:13:52.147 { 00:13:52.147 "name": null, 00:13:52.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.147 "is_configured": false, 00:13:52.147 "data_offset": 0, 00:13:52.147 "data_size": 63488 00:13:52.147 }, 00:13:52.147 { 00:13:52.147 "name": "BaseBdev2", 00:13:52.147 "uuid": "657d4539-2b93-5fa9-ae87-9995edd669c7", 00:13:52.147 "is_configured": true, 00:13:52.147 
"data_offset": 2048, 00:13:52.147 "data_size": 63488 00:13:52.147 }, 00:13:52.147 { 00:13:52.147 "name": "BaseBdev3", 00:13:52.147 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:13:52.147 "is_configured": true, 00:13:52.147 "data_offset": 2048, 00:13:52.147 "data_size": 63488 00:13:52.147 }, 00:13:52.147 { 00:13:52.147 "name": "BaseBdev4", 00:13:52.147 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:13:52.147 "is_configured": true, 00:13:52.147 "data_offset": 2048, 00:13:52.147 "data_size": 63488 00:13:52.147 } 00:13:52.147 ] 00:13:52.147 }' 00:13:52.147 17:06:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.148 17:06:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.406 17:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:52.406 17:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.406 17:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.406 [2024-11-20 17:06:16.231724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:52.406 [2024-11-20 17:06:16.246851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:52.406 17:06:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.406 17:06:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:52.406 [2024-11-20 17:06:16.249503] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.781 "name": "raid_bdev1", 00:13:53.781 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:13:53.781 "strip_size_kb": 0, 00:13:53.781 "state": "online", 00:13:53.781 "raid_level": "raid1", 00:13:53.781 "superblock": true, 00:13:53.781 "num_base_bdevs": 4, 00:13:53.781 "num_base_bdevs_discovered": 4, 00:13:53.781 "num_base_bdevs_operational": 4, 00:13:53.781 "process": { 00:13:53.781 "type": "rebuild", 00:13:53.781 "target": "spare", 00:13:53.781 "progress": { 00:13:53.781 "blocks": 20480, 00:13:53.781 "percent": 32 00:13:53.781 } 00:13:53.781 }, 00:13:53.781 "base_bdevs_list": [ 00:13:53.781 { 00:13:53.781 "name": "spare", 00:13:53.781 "uuid": "8dccf79d-a003-595e-91a3-5c08bf36cc62", 00:13:53.781 "is_configured": true, 00:13:53.781 "data_offset": 2048, 00:13:53.781 "data_size": 63488 00:13:53.781 }, 00:13:53.781 { 00:13:53.781 "name": "BaseBdev2", 00:13:53.781 "uuid": "657d4539-2b93-5fa9-ae87-9995edd669c7", 00:13:53.781 "is_configured": true, 00:13:53.781 "data_offset": 2048, 00:13:53.781 "data_size": 63488 00:13:53.781 }, 00:13:53.781 { 00:13:53.781 "name": "BaseBdev3", 00:13:53.781 "uuid": 
"2c7c42ee-2721-52bb-a052-d81a61baa517", 00:13:53.781 "is_configured": true, 00:13:53.781 "data_offset": 2048, 00:13:53.781 "data_size": 63488 00:13:53.781 }, 00:13:53.781 { 00:13:53.781 "name": "BaseBdev4", 00:13:53.781 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:13:53.781 "is_configured": true, 00:13:53.781 "data_offset": 2048, 00:13:53.781 "data_size": 63488 00:13:53.781 } 00:13:53.781 ] 00:13:53.781 }' 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.781 [2024-11-20 17:06:17.419211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:53.781 [2024-11-20 17:06:17.458554] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:53.781 [2024-11-20 17:06:17.458844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.781 [2024-11-20 17:06:17.458982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:53.781 [2024-11-20 17:06:17.459039] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.781 "name": "raid_bdev1", 00:13:53.781 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:13:53.781 "strip_size_kb": 0, 00:13:53.781 "state": "online", 00:13:53.781 "raid_level": "raid1", 00:13:53.781 "superblock": true, 00:13:53.781 "num_base_bdevs": 4, 00:13:53.781 
"num_base_bdevs_discovered": 3, 00:13:53.781 "num_base_bdevs_operational": 3, 00:13:53.781 "base_bdevs_list": [ 00:13:53.781 { 00:13:53.781 "name": null, 00:13:53.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.781 "is_configured": false, 00:13:53.781 "data_offset": 0, 00:13:53.781 "data_size": 63488 00:13:53.781 }, 00:13:53.781 { 00:13:53.781 "name": "BaseBdev2", 00:13:53.781 "uuid": "657d4539-2b93-5fa9-ae87-9995edd669c7", 00:13:53.781 "is_configured": true, 00:13:53.781 "data_offset": 2048, 00:13:53.781 "data_size": 63488 00:13:53.781 }, 00:13:53.781 { 00:13:53.781 "name": "BaseBdev3", 00:13:53.781 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:13:53.781 "is_configured": true, 00:13:53.781 "data_offset": 2048, 00:13:53.781 "data_size": 63488 00:13:53.781 }, 00:13:53.781 { 00:13:53.781 "name": "BaseBdev4", 00:13:53.781 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:13:53.781 "is_configured": true, 00:13:53.781 "data_offset": 2048, 00:13:53.781 "data_size": 63488 00:13:53.781 } 00:13:53.781 ] 00:13:53.781 }' 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.781 17:06:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.347 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:54.347 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.347 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:54.347 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:54.347 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.347 17:06:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.347 17:06:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.347 17:06:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.347 17:06:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.347 17:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.347 17:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.347 "name": "raid_bdev1", 00:13:54.347 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:13:54.347 "strip_size_kb": 0, 00:13:54.347 "state": "online", 00:13:54.347 "raid_level": "raid1", 00:13:54.347 "superblock": true, 00:13:54.347 "num_base_bdevs": 4, 00:13:54.347 "num_base_bdevs_discovered": 3, 00:13:54.347 "num_base_bdevs_operational": 3, 00:13:54.347 "base_bdevs_list": [ 00:13:54.347 { 00:13:54.347 "name": null, 00:13:54.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.347 "is_configured": false, 00:13:54.347 "data_offset": 0, 00:13:54.347 "data_size": 63488 00:13:54.347 }, 00:13:54.347 { 00:13:54.347 "name": "BaseBdev2", 00:13:54.347 "uuid": "657d4539-2b93-5fa9-ae87-9995edd669c7", 00:13:54.347 "is_configured": true, 00:13:54.347 "data_offset": 2048, 00:13:54.347 "data_size": 63488 00:13:54.347 }, 00:13:54.347 { 00:13:54.347 "name": "BaseBdev3", 00:13:54.347 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:13:54.347 "is_configured": true, 00:13:54.347 "data_offset": 2048, 00:13:54.347 "data_size": 63488 00:13:54.347 }, 00:13:54.347 { 00:13:54.347 "name": "BaseBdev4", 00:13:54.347 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:13:54.347 "is_configured": true, 00:13:54.347 "data_offset": 2048, 00:13:54.347 "data_size": 63488 00:13:54.347 } 00:13:54.347 ] 00:13:54.347 }' 00:13:54.347 17:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.347 17:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:13:54.347 17:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.347 17:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:54.347 17:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:54.347 17:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.347 17:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.347 [2024-11-20 17:06:18.154926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:54.347 [2024-11-20 17:06:18.168418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:54.347 17:06:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.347 17:06:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:54.347 [2024-11-20 17:06:18.171129] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.722 17:06:19 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.722 "name": "raid_bdev1", 00:13:55.722 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:13:55.722 "strip_size_kb": 0, 00:13:55.722 "state": "online", 00:13:55.722 "raid_level": "raid1", 00:13:55.722 "superblock": true, 00:13:55.722 "num_base_bdevs": 4, 00:13:55.722 "num_base_bdevs_discovered": 4, 00:13:55.722 "num_base_bdevs_operational": 4, 00:13:55.722 "process": { 00:13:55.722 "type": "rebuild", 00:13:55.722 "target": "spare", 00:13:55.722 "progress": { 00:13:55.722 "blocks": 20480, 00:13:55.722 "percent": 32 00:13:55.722 } 00:13:55.722 }, 00:13:55.722 "base_bdevs_list": [ 00:13:55.722 { 00:13:55.722 "name": "spare", 00:13:55.722 "uuid": "8dccf79d-a003-595e-91a3-5c08bf36cc62", 00:13:55.722 "is_configured": true, 00:13:55.722 "data_offset": 2048, 00:13:55.722 "data_size": 63488 00:13:55.722 }, 00:13:55.722 { 00:13:55.722 "name": "BaseBdev2", 00:13:55.722 "uuid": "657d4539-2b93-5fa9-ae87-9995edd669c7", 00:13:55.722 "is_configured": true, 00:13:55.722 "data_offset": 2048, 00:13:55.722 "data_size": 63488 00:13:55.722 }, 00:13:55.722 { 00:13:55.722 "name": "BaseBdev3", 00:13:55.722 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:13:55.722 "is_configured": true, 00:13:55.722 "data_offset": 2048, 00:13:55.722 "data_size": 63488 00:13:55.722 }, 00:13:55.722 { 00:13:55.722 "name": "BaseBdev4", 00:13:55.722 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:13:55.722 "is_configured": true, 00:13:55.722 "data_offset": 2048, 00:13:55.722 "data_size": 63488 00:13:55.722 } 00:13:55.722 ] 00:13:55.722 }' 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:55.722 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.722 [2024-11-20 17:06:19.344745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:55.722 [2024-11-20 17:06:19.480132] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.722 "name": "raid_bdev1", 00:13:55.722 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:13:55.722 "strip_size_kb": 0, 00:13:55.722 "state": "online", 00:13:55.722 "raid_level": "raid1", 00:13:55.722 "superblock": true, 00:13:55.722 "num_base_bdevs": 4, 00:13:55.722 "num_base_bdevs_discovered": 3, 00:13:55.722 "num_base_bdevs_operational": 3, 00:13:55.722 "process": { 00:13:55.722 "type": "rebuild", 00:13:55.722 "target": "spare", 00:13:55.722 "progress": { 00:13:55.722 "blocks": 24576, 00:13:55.722 "percent": 38 00:13:55.722 } 00:13:55.722 }, 00:13:55.722 "base_bdevs_list": [ 00:13:55.722 { 00:13:55.722 "name": "spare", 00:13:55.722 "uuid": "8dccf79d-a003-595e-91a3-5c08bf36cc62", 00:13:55.722 "is_configured": true, 00:13:55.722 "data_offset": 2048, 00:13:55.722 "data_size": 63488 00:13:55.722 }, 00:13:55.722 { 00:13:55.722 "name": null, 
00:13:55.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.722 "is_configured": false, 00:13:55.722 "data_offset": 0, 00:13:55.722 "data_size": 63488 00:13:55.722 }, 00:13:55.722 { 00:13:55.722 "name": "BaseBdev3", 00:13:55.722 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:13:55.722 "is_configured": true, 00:13:55.722 "data_offset": 2048, 00:13:55.722 "data_size": 63488 00:13:55.722 }, 00:13:55.722 { 00:13:55.722 "name": "BaseBdev4", 00:13:55.722 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:13:55.722 "is_configured": true, 00:13:55.722 "data_offset": 2048, 00:13:55.722 "data_size": 63488 00:13:55.722 } 00:13:55.722 ] 00:13:55.722 }' 00:13:55.722 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=495 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.981 "name": "raid_bdev1", 00:13:55.981 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:13:55.981 "strip_size_kb": 0, 00:13:55.981 "state": "online", 00:13:55.981 "raid_level": "raid1", 00:13:55.981 "superblock": true, 00:13:55.981 "num_base_bdevs": 4, 00:13:55.981 "num_base_bdevs_discovered": 3, 00:13:55.981 "num_base_bdevs_operational": 3, 00:13:55.981 "process": { 00:13:55.981 "type": "rebuild", 00:13:55.981 "target": "spare", 00:13:55.981 "progress": { 00:13:55.981 "blocks": 26624, 00:13:55.981 "percent": 41 00:13:55.981 } 00:13:55.981 }, 00:13:55.981 "base_bdevs_list": [ 00:13:55.981 { 00:13:55.981 "name": "spare", 00:13:55.981 "uuid": "8dccf79d-a003-595e-91a3-5c08bf36cc62", 00:13:55.981 "is_configured": true, 00:13:55.981 "data_offset": 2048, 00:13:55.981 "data_size": 63488 00:13:55.981 }, 00:13:55.981 { 00:13:55.981 "name": null, 00:13:55.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.981 "is_configured": false, 00:13:55.981 "data_offset": 0, 00:13:55.981 "data_size": 63488 00:13:55.981 }, 00:13:55.981 { 00:13:55.981 "name": "BaseBdev3", 00:13:55.981 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:13:55.981 "is_configured": true, 00:13:55.981 "data_offset": 2048, 00:13:55.981 "data_size": 63488 00:13:55.981 }, 00:13:55.981 { 00:13:55.981 "name": "BaseBdev4", 00:13:55.981 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:13:55.981 "is_configured": true, 00:13:55.981 "data_offset": 
2048, 00:13:55.981 "data_size": 63488 00:13:55.981 } 00:13:55.981 ] 00:13:55.981 }' 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.981 17:06:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:57.356 17:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.356 17:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.356 17:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.356 17:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.356 17:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.356 17:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.356 17:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.356 17:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.357 17:06:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.357 17:06:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.357 17:06:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.357 17:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.357 "name": "raid_bdev1", 00:13:57.357 
"uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:13:57.357 "strip_size_kb": 0, 00:13:57.357 "state": "online", 00:13:57.357 "raid_level": "raid1", 00:13:57.357 "superblock": true, 00:13:57.357 "num_base_bdevs": 4, 00:13:57.357 "num_base_bdevs_discovered": 3, 00:13:57.357 "num_base_bdevs_operational": 3, 00:13:57.357 "process": { 00:13:57.357 "type": "rebuild", 00:13:57.357 "target": "spare", 00:13:57.357 "progress": { 00:13:57.357 "blocks": 51200, 00:13:57.357 "percent": 80 00:13:57.357 } 00:13:57.357 }, 00:13:57.357 "base_bdevs_list": [ 00:13:57.357 { 00:13:57.357 "name": "spare", 00:13:57.357 "uuid": "8dccf79d-a003-595e-91a3-5c08bf36cc62", 00:13:57.357 "is_configured": true, 00:13:57.357 "data_offset": 2048, 00:13:57.357 "data_size": 63488 00:13:57.357 }, 00:13:57.357 { 00:13:57.357 "name": null, 00:13:57.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.357 "is_configured": false, 00:13:57.357 "data_offset": 0, 00:13:57.357 "data_size": 63488 00:13:57.357 }, 00:13:57.357 { 00:13:57.357 "name": "BaseBdev3", 00:13:57.357 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:13:57.357 "is_configured": true, 00:13:57.357 "data_offset": 2048, 00:13:57.357 "data_size": 63488 00:13:57.357 }, 00:13:57.357 { 00:13:57.357 "name": "BaseBdev4", 00:13:57.357 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:13:57.357 "is_configured": true, 00:13:57.357 "data_offset": 2048, 00:13:57.357 "data_size": 63488 00:13:57.357 } 00:13:57.357 ] 00:13:57.357 }' 00:13:57.357 17:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.357 17:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.357 17:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.357 17:06:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.357 17:06:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:13:57.615 [2024-11-20 17:06:21.393478] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:57.615 [2024-11-20 17:06:21.393556] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:57.615 [2024-11-20 17:06:21.393717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.184 17:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:58.184 17:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.184 17:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.184 17:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.184 17:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.184 17:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.184 17:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.184 17:06:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.184 17:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.184 17:06:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.184 17:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.184 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.184 "name": "raid_bdev1", 00:13:58.184 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:13:58.184 "strip_size_kb": 0, 00:13:58.184 "state": "online", 00:13:58.184 "raid_level": "raid1", 00:13:58.184 "superblock": true, 00:13:58.184 "num_base_bdevs": 
4, 00:13:58.184 "num_base_bdevs_discovered": 3, 00:13:58.184 "num_base_bdevs_operational": 3, 00:13:58.184 "base_bdevs_list": [ 00:13:58.184 { 00:13:58.184 "name": "spare", 00:13:58.184 "uuid": "8dccf79d-a003-595e-91a3-5c08bf36cc62", 00:13:58.184 "is_configured": true, 00:13:58.184 "data_offset": 2048, 00:13:58.184 "data_size": 63488 00:13:58.184 }, 00:13:58.184 { 00:13:58.184 "name": null, 00:13:58.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.184 "is_configured": false, 00:13:58.184 "data_offset": 0, 00:13:58.184 "data_size": 63488 00:13:58.184 }, 00:13:58.184 { 00:13:58.184 "name": "BaseBdev3", 00:13:58.184 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:13:58.184 "is_configured": true, 00:13:58.184 "data_offset": 2048, 00:13:58.184 "data_size": 63488 00:13:58.184 }, 00:13:58.184 { 00:13:58.184 "name": "BaseBdev4", 00:13:58.184 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:13:58.184 "is_configured": true, 00:13:58.184 "data_offset": 2048, 00:13:58.184 "data_size": 63488 00:13:58.184 } 00:13:58.184 ] 00:13:58.184 }' 00:13:58.184 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.453 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:58.453 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.453 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:58.453 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:58.453 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.453 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.453 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.453 17:06:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.453 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.453 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.453 17:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.453 17:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.453 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.453 17:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.453 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.453 "name": "raid_bdev1", 00:13:58.453 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:13:58.454 "strip_size_kb": 0, 00:13:58.454 "state": "online", 00:13:58.454 "raid_level": "raid1", 00:13:58.454 "superblock": true, 00:13:58.454 "num_base_bdevs": 4, 00:13:58.454 "num_base_bdevs_discovered": 3, 00:13:58.454 "num_base_bdevs_operational": 3, 00:13:58.454 "base_bdevs_list": [ 00:13:58.454 { 00:13:58.454 "name": "spare", 00:13:58.454 "uuid": "8dccf79d-a003-595e-91a3-5c08bf36cc62", 00:13:58.454 "is_configured": true, 00:13:58.454 "data_offset": 2048, 00:13:58.454 "data_size": 63488 00:13:58.454 }, 00:13:58.454 { 00:13:58.454 "name": null, 00:13:58.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.454 "is_configured": false, 00:13:58.454 "data_offset": 0, 00:13:58.454 "data_size": 63488 00:13:58.454 }, 00:13:58.454 { 00:13:58.454 "name": "BaseBdev3", 00:13:58.454 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:13:58.454 "is_configured": true, 00:13:58.454 "data_offset": 2048, 00:13:58.454 "data_size": 63488 00:13:58.454 }, 00:13:58.454 { 00:13:58.454 "name": "BaseBdev4", 00:13:58.454 "uuid": 
"b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:13:58.454 "is_configured": true, 00:13:58.454 "data_offset": 2048, 00:13:58.454 "data_size": 63488 00:13:58.454 } 00:13:58.454 ] 00:13:58.454 }' 00:13:58.454 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.454 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.454 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.454 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.454 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:58.454 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.454 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.454 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.454 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.454 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.454 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.454 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.454 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.454 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.454 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.454 17:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.454 17:06:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.454 17:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.712 17:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.712 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.712 "name": "raid_bdev1", 00:13:58.712 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:13:58.712 "strip_size_kb": 0, 00:13:58.712 "state": "online", 00:13:58.712 "raid_level": "raid1", 00:13:58.712 "superblock": true, 00:13:58.712 "num_base_bdevs": 4, 00:13:58.712 "num_base_bdevs_discovered": 3, 00:13:58.712 "num_base_bdevs_operational": 3, 00:13:58.712 "base_bdevs_list": [ 00:13:58.712 { 00:13:58.712 "name": "spare", 00:13:58.712 "uuid": "8dccf79d-a003-595e-91a3-5c08bf36cc62", 00:13:58.712 "is_configured": true, 00:13:58.712 "data_offset": 2048, 00:13:58.712 "data_size": 63488 00:13:58.712 }, 00:13:58.712 { 00:13:58.712 "name": null, 00:13:58.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.712 "is_configured": false, 00:13:58.712 "data_offset": 0, 00:13:58.712 "data_size": 63488 00:13:58.712 }, 00:13:58.712 { 00:13:58.712 "name": "BaseBdev3", 00:13:58.712 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:13:58.712 "is_configured": true, 00:13:58.712 "data_offset": 2048, 00:13:58.712 "data_size": 63488 00:13:58.712 }, 00:13:58.712 { 00:13:58.712 "name": "BaseBdev4", 00:13:58.712 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:13:58.712 "is_configured": true, 00:13:58.712 "data_offset": 2048, 00:13:58.712 "data_size": 63488 00:13:58.712 } 00:13:58.712 ] 00:13:58.712 }' 00:13:58.712 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.712 17:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.970 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 
-- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:58.971 17:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.971 17:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.971 [2024-11-20 17:06:22.796783] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:58.971 [2024-11-20 17:06:22.796993] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:58.971 [2024-11-20 17:06:22.797124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.971 [2024-11-20 17:06:22.797238] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.971 [2024-11-20 17:06:22.797254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:58.971 17:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.971 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:58.971 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.971 17:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.971 17:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.971 17:06:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.229 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:59.229 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:59.229 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:59.229 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:13:59.229 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:59.229 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:59.229 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:59.229 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:59.229 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:59.229 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:59.229 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:59.229 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:59.229 17:06:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:59.487 /dev/nbd0 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:59.487 
17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.487 1+0 records in 00:13:59.487 1+0 records out 00:13:59.487 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360099 s, 11.4 MB/s 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:59.487 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:59.745 /dev/nbd1 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.745 1+0 records in 00:13:59.745 1+0 records out 00:13:59.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416039 s, 9.8 MB/s 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:59.745 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:00.003 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:00.003 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.003 17:06:23 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:00.003 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:00.003 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:00.003 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.003 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:00.261 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:00.261 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:00.261 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:00.261 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.261 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.261 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:00.261 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:00.261 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.261 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.261 17:06:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:00.520 17:06:24 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.520 [2024-11-20 17:06:24.258316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:00.520 [2024-11-20 17:06:24.258398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.520 [2024-11-20 17:06:24.258433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:00.520 [2024-11-20 17:06:24.258448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.520 [2024-11-20 17:06:24.261406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.520 [2024-11-20 17:06:24.261463] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:00.520 [2024-11-20 17:06:24.261596] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:00.520 [2024-11-20 17:06:24.261662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:00.520 [2024-11-20 17:06:24.261876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:00.520 [2024-11-20 17:06:24.262010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:00.520 spare 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.520 [2024-11-20 17:06:24.362153] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:00.520 [2024-11-20 17:06:24.362224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:00.520 [2024-11-20 17:06:24.362699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:00.520 [2024-11-20 17:06:24.362989] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:00.520 [2024-11-20 17:06:24.363021] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:00.520 [2024-11-20 17:06:24.363260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.520 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.779 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.779 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.779 "name": "raid_bdev1", 00:14:00.779 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:14:00.779 "strip_size_kb": 0, 00:14:00.779 "state": "online", 00:14:00.779 "raid_level": "raid1", 00:14:00.779 "superblock": true, 00:14:00.779 "num_base_bdevs": 4, 00:14:00.779 "num_base_bdevs_discovered": 3, 00:14:00.779 "num_base_bdevs_operational": 
3, 00:14:00.779 "base_bdevs_list": [ 00:14:00.779 { 00:14:00.779 "name": "spare", 00:14:00.779 "uuid": "8dccf79d-a003-595e-91a3-5c08bf36cc62", 00:14:00.779 "is_configured": true, 00:14:00.779 "data_offset": 2048, 00:14:00.779 "data_size": 63488 00:14:00.779 }, 00:14:00.779 { 00:14:00.779 "name": null, 00:14:00.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.779 "is_configured": false, 00:14:00.779 "data_offset": 2048, 00:14:00.779 "data_size": 63488 00:14:00.779 }, 00:14:00.779 { 00:14:00.779 "name": "BaseBdev3", 00:14:00.779 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:14:00.779 "is_configured": true, 00:14:00.779 "data_offset": 2048, 00:14:00.779 "data_size": 63488 00:14:00.779 }, 00:14:00.779 { 00:14:00.779 "name": "BaseBdev4", 00:14:00.779 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:14:00.779 "is_configured": true, 00:14:00.779 "data_offset": 2048, 00:14:00.779 "data_size": 63488 00:14:00.779 } 00:14:00.779 ] 00:14:00.779 }' 00:14:00.779 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.779 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.037 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:01.037 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.037 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:01.037 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:01.037 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.037 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.037 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.037 17:06:24 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.037 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.037 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.037 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.037 "name": "raid_bdev1", 00:14:01.037 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:14:01.037 "strip_size_kb": 0, 00:14:01.037 "state": "online", 00:14:01.037 "raid_level": "raid1", 00:14:01.037 "superblock": true, 00:14:01.037 "num_base_bdevs": 4, 00:14:01.037 "num_base_bdevs_discovered": 3, 00:14:01.037 "num_base_bdevs_operational": 3, 00:14:01.037 "base_bdevs_list": [ 00:14:01.037 { 00:14:01.037 "name": "spare", 00:14:01.037 "uuid": "8dccf79d-a003-595e-91a3-5c08bf36cc62", 00:14:01.037 "is_configured": true, 00:14:01.037 "data_offset": 2048, 00:14:01.037 "data_size": 63488 00:14:01.037 }, 00:14:01.037 { 00:14:01.037 "name": null, 00:14:01.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.037 "is_configured": false, 00:14:01.037 "data_offset": 2048, 00:14:01.037 "data_size": 63488 00:14:01.037 }, 00:14:01.037 { 00:14:01.037 "name": "BaseBdev3", 00:14:01.037 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:14:01.037 "is_configured": true, 00:14:01.037 "data_offset": 2048, 00:14:01.037 "data_size": 63488 00:14:01.037 }, 00:14:01.037 { 00:14:01.037 "name": "BaseBdev4", 00:14:01.037 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:14:01.037 "is_configured": true, 00:14:01.037 "data_offset": 2048, 00:14:01.037 "data_size": 63488 00:14:01.037 } 00:14:01.037 ] 00:14:01.037 }' 00:14:01.037 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.296 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:01.296 17:06:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.296 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:01.296 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:01.296 17:06:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.296 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.296 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.296 17:06:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.296 [2024-11-20 17:06:25.015451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.296 "name": "raid_bdev1", 00:14:01.296 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:14:01.296 "strip_size_kb": 0, 00:14:01.296 "state": "online", 00:14:01.296 "raid_level": "raid1", 00:14:01.296 "superblock": true, 00:14:01.296 "num_base_bdevs": 4, 00:14:01.296 "num_base_bdevs_discovered": 2, 00:14:01.296 "num_base_bdevs_operational": 2, 00:14:01.296 "base_bdevs_list": [ 00:14:01.296 { 00:14:01.296 "name": null, 00:14:01.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.296 "is_configured": false, 00:14:01.296 "data_offset": 0, 00:14:01.296 "data_size": 63488 00:14:01.296 }, 00:14:01.296 { 00:14:01.296 "name": null, 00:14:01.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.296 "is_configured": false, 00:14:01.296 "data_offset": 2048, 00:14:01.296 "data_size": 63488 00:14:01.296 }, 00:14:01.296 { 00:14:01.296 "name": "BaseBdev3", 00:14:01.296 
"uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:14:01.296 "is_configured": true, 00:14:01.296 "data_offset": 2048, 00:14:01.296 "data_size": 63488 00:14:01.296 }, 00:14:01.296 { 00:14:01.296 "name": "BaseBdev4", 00:14:01.296 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:14:01.296 "is_configured": true, 00:14:01.296 "data_offset": 2048, 00:14:01.296 "data_size": 63488 00:14:01.296 } 00:14:01.296 ] 00:14:01.296 }' 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.296 17:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.864 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:01.864 17:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.864 17:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.864 [2024-11-20 17:06:25.519634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.864 [2024-11-20 17:06:25.519897] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:01.864 [2024-11-20 17:06:25.519935] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:01.864 [2024-11-20 17:06:25.519983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.864 [2024-11-20 17:06:25.533419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:01.864 17:06:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.864 17:06:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:01.864 [2024-11-20 17:06:25.536046] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:02.800 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.800 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.800 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.800 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.800 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.800 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.800 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.800 17:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.800 17:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.800 17:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.800 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.800 "name": "raid_bdev1", 00:14:02.801 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:14:02.801 "strip_size_kb": 0, 00:14:02.801 "state": "online", 00:14:02.801 "raid_level": "raid1", 
00:14:02.801 "superblock": true, 00:14:02.801 "num_base_bdevs": 4, 00:14:02.801 "num_base_bdevs_discovered": 3, 00:14:02.801 "num_base_bdevs_operational": 3, 00:14:02.801 "process": { 00:14:02.801 "type": "rebuild", 00:14:02.801 "target": "spare", 00:14:02.801 "progress": { 00:14:02.801 "blocks": 20480, 00:14:02.801 "percent": 32 00:14:02.801 } 00:14:02.801 }, 00:14:02.801 "base_bdevs_list": [ 00:14:02.801 { 00:14:02.801 "name": "spare", 00:14:02.801 "uuid": "8dccf79d-a003-595e-91a3-5c08bf36cc62", 00:14:02.801 "is_configured": true, 00:14:02.801 "data_offset": 2048, 00:14:02.801 "data_size": 63488 00:14:02.801 }, 00:14:02.801 { 00:14:02.801 "name": null, 00:14:02.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.801 "is_configured": false, 00:14:02.801 "data_offset": 2048, 00:14:02.801 "data_size": 63488 00:14:02.801 }, 00:14:02.801 { 00:14:02.801 "name": "BaseBdev3", 00:14:02.801 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:14:02.801 "is_configured": true, 00:14:02.801 "data_offset": 2048, 00:14:02.801 "data_size": 63488 00:14:02.801 }, 00:14:02.801 { 00:14:02.801 "name": "BaseBdev4", 00:14:02.801 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:14:02.801 "is_configured": true, 00:14:02.801 "data_offset": 2048, 00:14:02.801 "data_size": 63488 00:14:02.801 } 00:14:02.801 ] 00:14:02.801 }' 00:14:02.801 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.801 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.801 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.060 [2024-11-20 17:06:26.697811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.060 [2024-11-20 17:06:26.744881] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:03.060 [2024-11-20 17:06:26.744973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.060 [2024-11-20 17:06:26.744998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.060 [2024-11-20 17:06:26.745008] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.060 "name": "raid_bdev1", 00:14:03.060 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:14:03.060 "strip_size_kb": 0, 00:14:03.060 "state": "online", 00:14:03.060 "raid_level": "raid1", 00:14:03.060 "superblock": true, 00:14:03.060 "num_base_bdevs": 4, 00:14:03.060 "num_base_bdevs_discovered": 2, 00:14:03.060 "num_base_bdevs_operational": 2, 00:14:03.060 "base_bdevs_list": [ 00:14:03.060 { 00:14:03.060 "name": null, 00:14:03.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.060 "is_configured": false, 00:14:03.060 "data_offset": 0, 00:14:03.060 "data_size": 63488 00:14:03.060 }, 00:14:03.060 { 00:14:03.060 "name": null, 00:14:03.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.060 "is_configured": false, 00:14:03.060 "data_offset": 2048, 00:14:03.060 "data_size": 63488 00:14:03.060 }, 00:14:03.060 { 00:14:03.060 "name": "BaseBdev3", 00:14:03.060 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:14:03.060 "is_configured": true, 00:14:03.060 "data_offset": 2048, 00:14:03.060 "data_size": 63488 00:14:03.060 }, 00:14:03.060 { 00:14:03.060 "name": "BaseBdev4", 00:14:03.060 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:14:03.060 "is_configured": true, 00:14:03.060 "data_offset": 2048, 00:14:03.060 "data_size": 63488 00:14:03.060 } 00:14:03.060 ] 00:14:03.060 }' 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:03.060 17:06:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.627 17:06:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:03.627 17:06:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.627 17:06:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.627 [2024-11-20 17:06:27.275028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:03.627 [2024-11-20 17:06:27.275110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.627 [2024-11-20 17:06:27.275150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:03.627 [2024-11-20 17:06:27.275166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.627 [2024-11-20 17:06:27.275747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.627 [2024-11-20 17:06:27.275801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:03.627 [2024-11-20 17:06:27.275917] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:03.627 [2024-11-20 17:06:27.275936] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:03.627 [2024-11-20 17:06:27.275954] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:03.627 [2024-11-20 17:06:27.275985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.627 [2024-11-20 17:06:27.289818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:03.627 spare 00:14:03.627 17:06:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.627 17:06:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:03.627 [2024-11-20 17:06:27.292329] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:04.564 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.564 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.564 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.564 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.564 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.564 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.564 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.564 17:06:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.564 17:06:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.564 17:06:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.564 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.564 "name": "raid_bdev1", 00:14:04.564 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:14:04.564 "strip_size_kb": 0, 00:14:04.564 "state": "online", 00:14:04.564 
"raid_level": "raid1", 00:14:04.564 "superblock": true, 00:14:04.564 "num_base_bdevs": 4, 00:14:04.564 "num_base_bdevs_discovered": 3, 00:14:04.564 "num_base_bdevs_operational": 3, 00:14:04.564 "process": { 00:14:04.564 "type": "rebuild", 00:14:04.564 "target": "spare", 00:14:04.564 "progress": { 00:14:04.564 "blocks": 20480, 00:14:04.564 "percent": 32 00:14:04.564 } 00:14:04.564 }, 00:14:04.564 "base_bdevs_list": [ 00:14:04.564 { 00:14:04.564 "name": "spare", 00:14:04.564 "uuid": "8dccf79d-a003-595e-91a3-5c08bf36cc62", 00:14:04.564 "is_configured": true, 00:14:04.564 "data_offset": 2048, 00:14:04.564 "data_size": 63488 00:14:04.564 }, 00:14:04.564 { 00:14:04.564 "name": null, 00:14:04.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.564 "is_configured": false, 00:14:04.564 "data_offset": 2048, 00:14:04.564 "data_size": 63488 00:14:04.564 }, 00:14:04.564 { 00:14:04.564 "name": "BaseBdev3", 00:14:04.564 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:14:04.564 "is_configured": true, 00:14:04.564 "data_offset": 2048, 00:14:04.564 "data_size": 63488 00:14:04.564 }, 00:14:04.564 { 00:14:04.564 "name": "BaseBdev4", 00:14:04.564 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:14:04.564 "is_configured": true, 00:14:04.564 "data_offset": 2048, 00:14:04.564 "data_size": 63488 00:14:04.564 } 00:14:04.564 ] 00:14:04.564 }' 00:14:04.564 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.564 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.564 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.824 [2024-11-20 17:06:28.463208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:04.824 [2024-11-20 17:06:28.501181] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:04.824 [2024-11-20 17:06:28.501273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.824 [2024-11-20 17:06:28.501297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:04.824 [2024-11-20 17:06:28.501311] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.824 
17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.824 "name": "raid_bdev1", 00:14:04.824 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:14:04.824 "strip_size_kb": 0, 00:14:04.824 "state": "online", 00:14:04.824 "raid_level": "raid1", 00:14:04.824 "superblock": true, 00:14:04.824 "num_base_bdevs": 4, 00:14:04.824 "num_base_bdevs_discovered": 2, 00:14:04.824 "num_base_bdevs_operational": 2, 00:14:04.824 "base_bdevs_list": [ 00:14:04.824 { 00:14:04.824 "name": null, 00:14:04.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.824 "is_configured": false, 00:14:04.824 "data_offset": 0, 00:14:04.824 "data_size": 63488 00:14:04.824 }, 00:14:04.824 { 00:14:04.824 "name": null, 00:14:04.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.824 "is_configured": false, 00:14:04.824 "data_offset": 2048, 00:14:04.824 "data_size": 63488 00:14:04.824 }, 00:14:04.824 { 00:14:04.824 "name": "BaseBdev3", 00:14:04.824 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:14:04.824 "is_configured": true, 00:14:04.824 "data_offset": 2048, 00:14:04.824 "data_size": 63488 00:14:04.824 }, 00:14:04.824 { 00:14:04.824 "name": "BaseBdev4", 00:14:04.824 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:14:04.824 "is_configured": true, 00:14:04.824 "data_offset": 2048, 00:14:04.824 "data_size": 63488 00:14:04.824 } 00:14:04.824 ] 00:14:04.824 }' 00:14:04.824 17:06:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.824 17:06:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.393 "name": "raid_bdev1", 00:14:05.393 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:14:05.393 "strip_size_kb": 0, 00:14:05.393 "state": "online", 00:14:05.393 "raid_level": "raid1", 00:14:05.393 "superblock": true, 00:14:05.393 "num_base_bdevs": 4, 00:14:05.393 "num_base_bdevs_discovered": 2, 00:14:05.393 "num_base_bdevs_operational": 2, 00:14:05.393 "base_bdevs_list": [ 00:14:05.393 { 00:14:05.393 "name": null, 00:14:05.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.393 "is_configured": false, 00:14:05.393 "data_offset": 0, 00:14:05.393 "data_size": 63488 00:14:05.393 }, 00:14:05.393 
{ 00:14:05.393 "name": null, 00:14:05.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.393 "is_configured": false, 00:14:05.393 "data_offset": 2048, 00:14:05.393 "data_size": 63488 00:14:05.393 }, 00:14:05.393 { 00:14:05.393 "name": "BaseBdev3", 00:14:05.393 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:14:05.393 "is_configured": true, 00:14:05.393 "data_offset": 2048, 00:14:05.393 "data_size": 63488 00:14:05.393 }, 00:14:05.393 { 00:14:05.393 "name": "BaseBdev4", 00:14:05.393 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:14:05.393 "is_configured": true, 00:14:05.393 "data_offset": 2048, 00:14:05.393 "data_size": 63488 00:14:05.393 } 00:14:05.393 ] 00:14:05.393 }' 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.393 [2024-11-20 17:06:29.211619] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:05.393 [2024-11-20 17:06:29.211684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.393 [2024-11-20 17:06:29.211712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:05.393 [2024-11-20 17:06:29.211729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.393 [2024-11-20 17:06:29.212299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.393 [2024-11-20 17:06:29.212348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:05.393 [2024-11-20 17:06:29.212442] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:05.393 [2024-11-20 17:06:29.212467] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:05.393 [2024-11-20 17:06:29.212479] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:05.393 [2024-11-20 17:06:29.212507] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:05.393 BaseBdev1 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.393 17:06:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:06.768 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:06.768 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.768 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.768 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.768 17:06:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.768 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.768 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.768 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.768 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.768 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.768 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.768 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.768 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.768 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.768 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.768 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.768 "name": "raid_bdev1", 00:14:06.768 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:14:06.768 "strip_size_kb": 0, 00:14:06.768 "state": "online", 00:14:06.768 "raid_level": "raid1", 00:14:06.768 "superblock": true, 00:14:06.768 "num_base_bdevs": 4, 00:14:06.768 "num_base_bdevs_discovered": 2, 00:14:06.768 "num_base_bdevs_operational": 2, 00:14:06.768 "base_bdevs_list": [ 00:14:06.768 { 00:14:06.768 "name": null, 00:14:06.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.768 "is_configured": false, 00:14:06.768 "data_offset": 0, 00:14:06.768 "data_size": 63488 00:14:06.768 }, 00:14:06.768 { 00:14:06.768 "name": null, 00:14:06.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.768 
"is_configured": false, 00:14:06.768 "data_offset": 2048, 00:14:06.768 "data_size": 63488 00:14:06.768 }, 00:14:06.768 { 00:14:06.768 "name": "BaseBdev3", 00:14:06.768 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:14:06.768 "is_configured": true, 00:14:06.768 "data_offset": 2048, 00:14:06.768 "data_size": 63488 00:14:06.768 }, 00:14:06.768 { 00:14:06.768 "name": "BaseBdev4", 00:14:06.768 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:14:06.768 "is_configured": true, 00:14:06.768 "data_offset": 2048, 00:14:06.768 "data_size": 63488 00:14:06.768 } 00:14:06.768 ] 00:14:06.768 }' 00:14:06.768 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.768 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.027 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.027 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.027 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.027 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.027 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.027 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.027 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.027 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.027 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.027 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.027 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:07.027 "name": "raid_bdev1", 00:14:07.027 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:14:07.027 "strip_size_kb": 0, 00:14:07.027 "state": "online", 00:14:07.027 "raid_level": "raid1", 00:14:07.027 "superblock": true, 00:14:07.027 "num_base_bdevs": 4, 00:14:07.027 "num_base_bdevs_discovered": 2, 00:14:07.027 "num_base_bdevs_operational": 2, 00:14:07.027 "base_bdevs_list": [ 00:14:07.027 { 00:14:07.027 "name": null, 00:14:07.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.027 "is_configured": false, 00:14:07.027 "data_offset": 0, 00:14:07.027 "data_size": 63488 00:14:07.027 }, 00:14:07.027 { 00:14:07.027 "name": null, 00:14:07.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.027 "is_configured": false, 00:14:07.027 "data_offset": 2048, 00:14:07.027 "data_size": 63488 00:14:07.027 }, 00:14:07.027 { 00:14:07.027 "name": "BaseBdev3", 00:14:07.027 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:14:07.027 "is_configured": true, 00:14:07.027 "data_offset": 2048, 00:14:07.027 "data_size": 63488 00:14:07.027 }, 00:14:07.027 { 00:14:07.027 "name": "BaseBdev4", 00:14:07.027 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:14:07.027 "is_configured": true, 00:14:07.027 "data_offset": 2048, 00:14:07.027 "data_size": 63488 00:14:07.027 } 00:14:07.027 ] 00:14:07.027 }' 00:14:07.027 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.027 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.027 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.286 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.286 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:07.286 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:07.286 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:07.286 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:07.286 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:07.286 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:07.286 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:07.286 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:07.286 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.286 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.286 [2024-11-20 17:06:30.908157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:07.286 [2024-11-20 17:06:30.908387] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:07.286 [2024-11-20 17:06:30.908419] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:07.286 request: 00:14:07.286 { 00:14:07.286 "base_bdev": "BaseBdev1", 00:14:07.286 "raid_bdev": "raid_bdev1", 00:14:07.286 "method": "bdev_raid_add_base_bdev", 00:14:07.286 "req_id": 1 00:14:07.286 } 00:14:07.286 Got JSON-RPC error response 00:14:07.286 response: 00:14:07.286 { 00:14:07.286 "code": -22, 00:14:07.286 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:07.286 } 00:14:07.286 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:07.286 17:06:30 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:07.286 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:07.286 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:07.286 17:06:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:07.286 17:06:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.223 "name": "raid_bdev1", 00:14:08.223 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:14:08.223 "strip_size_kb": 0, 00:14:08.223 "state": "online", 00:14:08.223 "raid_level": "raid1", 00:14:08.223 "superblock": true, 00:14:08.223 "num_base_bdevs": 4, 00:14:08.223 "num_base_bdevs_discovered": 2, 00:14:08.223 "num_base_bdevs_operational": 2, 00:14:08.223 "base_bdevs_list": [ 00:14:08.223 { 00:14:08.223 "name": null, 00:14:08.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.223 "is_configured": false, 00:14:08.223 "data_offset": 0, 00:14:08.223 "data_size": 63488 00:14:08.223 }, 00:14:08.223 { 00:14:08.223 "name": null, 00:14:08.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.223 "is_configured": false, 00:14:08.223 "data_offset": 2048, 00:14:08.223 "data_size": 63488 00:14:08.223 }, 00:14:08.223 { 00:14:08.223 "name": "BaseBdev3", 00:14:08.223 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:14:08.223 "is_configured": true, 00:14:08.223 "data_offset": 2048, 00:14:08.223 "data_size": 63488 00:14:08.223 }, 00:14:08.223 { 00:14:08.223 "name": "BaseBdev4", 00:14:08.223 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:14:08.223 "is_configured": true, 00:14:08.223 "data_offset": 2048, 00:14:08.223 "data_size": 63488 00:14:08.223 } 00:14:08.223 ] 00:14:08.223 }' 00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.223 17:06:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.804 17:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:08.804 17:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.804 17:06:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.804 17:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.804 17:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.804 17:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.804 17:06:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.804 17:06:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.805 "name": "raid_bdev1", 00:14:08.805 "uuid": "6bf1832b-eef8-4c6e-adaa-b6a5f248fcf1", 00:14:08.805 "strip_size_kb": 0, 00:14:08.805 "state": "online", 00:14:08.805 "raid_level": "raid1", 00:14:08.805 "superblock": true, 00:14:08.805 "num_base_bdevs": 4, 00:14:08.805 "num_base_bdevs_discovered": 2, 00:14:08.805 "num_base_bdevs_operational": 2, 00:14:08.805 "base_bdevs_list": [ 00:14:08.805 { 00:14:08.805 "name": null, 00:14:08.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.805 "is_configured": false, 00:14:08.805 "data_offset": 0, 00:14:08.805 "data_size": 63488 00:14:08.805 }, 00:14:08.805 { 00:14:08.805 "name": null, 00:14:08.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.805 "is_configured": false, 00:14:08.805 "data_offset": 2048, 00:14:08.805 "data_size": 63488 00:14:08.805 }, 00:14:08.805 { 00:14:08.805 "name": "BaseBdev3", 00:14:08.805 "uuid": "2c7c42ee-2721-52bb-a052-d81a61baa517", 00:14:08.805 "is_configured": true, 00:14:08.805 "data_offset": 2048, 00:14:08.805 "data_size": 63488 00:14:08.805 }, 
00:14:08.805 { 00:14:08.805 "name": "BaseBdev4", 00:14:08.805 "uuid": "b85e5ef1-a42d-5359-b087-869d3094a7f6", 00:14:08.805 "is_configured": true, 00:14:08.805 "data_offset": 2048, 00:14:08.805 "data_size": 63488 00:14:08.805 } 00:14:08.805 ] 00:14:08.805 }' 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78061 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78061 ']' 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78061 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78061 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:08.805 killing process with pid 78061 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78061' 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78061 00:14:08.805 Received shutdown signal, test time was about 60.000000 seconds 00:14:08.805 00:14:08.805 Latency(us) 00:14:08.805 
[2024-11-20T17:06:32.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.805 [2024-11-20T17:06:32.674Z] =================================================================================================================== 00:14:08.805 [2024-11-20T17:06:32.674Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:08.805 [2024-11-20 17:06:32.643227] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:08.805 17:06:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78061 00:14:08.805 [2024-11-20 17:06:32.643400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.805 [2024-11-20 17:06:32.643488] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.805 [2024-11-20 17:06:32.643505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:09.383 [2024-11-20 17:06:33.098398] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.320 17:06:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:10.320 00:14:10.320 real 0m28.876s 00:14:10.320 user 0m35.166s 00:14:10.320 sys 0m3.954s 00:14:10.320 17:06:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.320 17:06:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.320 ************************************ 00:14:10.320 END TEST raid_rebuild_test_sb 00:14:10.320 ************************************ 00:14:10.578 17:06:34 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:10.578 17:06:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:10.578 17:06:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.578 17:06:34 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:14:10.578 ************************************ 00:14:10.578 START TEST raid_rebuild_test_io 00:14:10.578 ************************************ 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78849 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78849 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78849 ']' 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.578 17:06:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.578 [2024-11-20 17:06:34.349490] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:14:10.578 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:10.578 Zero copy mechanism will not be used. 00:14:10.578 [2024-11-20 17:06:34.349723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78849 ] 00:14:10.836 [2024-11-20 17:06:34.544673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.095 [2024-11-20 17:06:34.712588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.095 [2024-11-20 17:06:34.932213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.095 [2024-11-20 17:06:34.932275] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.663 BaseBdev1_malloc 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.663 [2024-11-20 17:06:35.397390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:11.663 [2024-11-20 17:06:35.397464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.663 [2024-11-20 17:06:35.397495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:11.663 [2024-11-20 17:06:35.397514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.663 [2024-11-20 17:06:35.400476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.663 [2024-11-20 17:06:35.400597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:11.663 BaseBdev1 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:11.663 BaseBdev2_malloc 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.663 [2024-11-20 17:06:35.452253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:11.663 [2024-11-20 17:06:35.452355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.663 [2024-11-20 17:06:35.452386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:11.663 [2024-11-20 17:06:35.452413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.663 [2024-11-20 17:06:35.455257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.663 [2024-11-20 17:06:35.455302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:11.663 BaseBdev2 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.663 BaseBdev3_malloc 00:14:11.663 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.664 17:06:35 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:11.664 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.664 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.664 [2024-11-20 17:06:35.513272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:11.664 [2024-11-20 17:06:35.513366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.664 [2024-11-20 17:06:35.513395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:11.664 [2024-11-20 17:06:35.513413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.664 [2024-11-20 17:06:35.516228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.664 [2024-11-20 17:06:35.516302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:11.664 BaseBdev3 00:14:11.664 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.664 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.664 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:11.664 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.664 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.923 BaseBdev4_malloc 00:14:11.923 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.923 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:11.923 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:11.923 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.923 [2024-11-20 17:06:35.568591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:11.923 [2024-11-20 17:06:35.568704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.923 [2024-11-20 17:06:35.568731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:11.923 [2024-11-20 17:06:35.568748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.923 [2024-11-20 17:06:35.571390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.923 [2024-11-20 17:06:35.571468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:11.923 BaseBdev4 00:14:11.923 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.923 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:11.923 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.923 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.923 spare_malloc 00:14:11.923 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.923 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:11.923 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.923 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.923 spare_delay 00:14:11.923 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.923 17:06:35 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.924 [2024-11-20 17:06:35.626412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:11.924 [2024-11-20 17:06:35.626476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.924 [2024-11-20 17:06:35.626504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:11.924 [2024-11-20 17:06:35.626522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.924 [2024-11-20 17:06:35.629388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.924 [2024-11-20 17:06:35.629466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:11.924 spare 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.924 [2024-11-20 17:06:35.638483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.924 [2024-11-20 17:06:35.641041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.924 [2024-11-20 17:06:35.641133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:11.924 [2024-11-20 17:06:35.641219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:14:11.924 [2024-11-20 17:06:35.641333] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:11.924 [2024-11-20 17:06:35.641367] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:11.924 [2024-11-20 17:06:35.641700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:11.924 [2024-11-20 17:06:35.641946] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:11.924 [2024-11-20 17:06:35.641976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:11.924 [2024-11-20 17:06:35.642164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.924 "name": "raid_bdev1", 00:14:11.924 "uuid": "42a10883-91b6-4909-92fd-41727e37069f", 00:14:11.924 "strip_size_kb": 0, 00:14:11.924 "state": "online", 00:14:11.924 "raid_level": "raid1", 00:14:11.924 "superblock": false, 00:14:11.924 "num_base_bdevs": 4, 00:14:11.924 "num_base_bdevs_discovered": 4, 00:14:11.924 "num_base_bdevs_operational": 4, 00:14:11.924 "base_bdevs_list": [ 00:14:11.924 { 00:14:11.924 "name": "BaseBdev1", 00:14:11.924 "uuid": "46477b26-67cc-5f94-a5c8-f02b452eaab3", 00:14:11.924 "is_configured": true, 00:14:11.924 "data_offset": 0, 00:14:11.924 "data_size": 65536 00:14:11.924 }, 00:14:11.924 { 00:14:11.924 "name": "BaseBdev2", 00:14:11.924 "uuid": "e2ca1df1-2c56-5101-95d9-4ec5dc68020e", 00:14:11.924 "is_configured": true, 00:14:11.924 "data_offset": 0, 00:14:11.924 "data_size": 65536 00:14:11.924 }, 00:14:11.924 { 00:14:11.924 "name": "BaseBdev3", 00:14:11.924 "uuid": "cd9d9eeb-80a3-5e3a-800c-b30a5d08b800", 00:14:11.924 "is_configured": true, 00:14:11.924 "data_offset": 0, 00:14:11.924 "data_size": 65536 00:14:11.924 }, 00:14:11.924 { 00:14:11.924 "name": "BaseBdev4", 00:14:11.924 "uuid": "15d03e10-2f83-5b95-a2e1-54697d045ca3", 00:14:11.924 "is_configured": true, 00:14:11.924 "data_offset": 0, 00:14:11.924 "data_size": 65536 00:14:11.924 } 00:14:11.924 ] 00:14:11.924 }' 00:14:11.924 
17:06:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.924 17:06:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.491 [2024-11-20 17:06:36.155146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:12.491 17:06:36 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.491 [2024-11-20 17:06:36.250668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.491 "name": "raid_bdev1", 00:14:12.491 "uuid": "42a10883-91b6-4909-92fd-41727e37069f", 00:14:12.491 "strip_size_kb": 0, 00:14:12.491 "state": "online", 00:14:12.491 "raid_level": "raid1", 00:14:12.491 "superblock": false, 00:14:12.491 "num_base_bdevs": 4, 00:14:12.491 "num_base_bdevs_discovered": 3, 00:14:12.491 "num_base_bdevs_operational": 3, 00:14:12.491 "base_bdevs_list": [ 00:14:12.491 { 00:14:12.491 "name": null, 00:14:12.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.491 "is_configured": false, 00:14:12.491 "data_offset": 0, 00:14:12.491 "data_size": 65536 00:14:12.491 }, 00:14:12.491 { 00:14:12.491 "name": "BaseBdev2", 00:14:12.491 "uuid": "e2ca1df1-2c56-5101-95d9-4ec5dc68020e", 00:14:12.491 "is_configured": true, 00:14:12.491 "data_offset": 0, 00:14:12.491 "data_size": 65536 00:14:12.491 }, 00:14:12.491 { 00:14:12.491 "name": "BaseBdev3", 00:14:12.491 "uuid": "cd9d9eeb-80a3-5e3a-800c-b30a5d08b800", 00:14:12.491 "is_configured": true, 00:14:12.491 "data_offset": 0, 00:14:12.491 "data_size": 65536 00:14:12.491 }, 00:14:12.491 { 00:14:12.491 "name": "BaseBdev4", 00:14:12.491 "uuid": "15d03e10-2f83-5b95-a2e1-54697d045ca3", 00:14:12.491 "is_configured": true, 00:14:12.491 "data_offset": 0, 00:14:12.491 "data_size": 65536 00:14:12.491 } 00:14:12.491 ] 00:14:12.491 }' 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.491 17:06:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.750 [2024-11-20 17:06:36.383489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:12.750 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:12.750 Zero copy mechanism will not be used. 00:14:12.750 Running I/O for 60 seconds... 
00:14:13.009 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:13.009 17:06:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.009 17:06:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.009 [2024-11-20 17:06:36.790398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:13.009 17:06:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.009 17:06:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:13.009 [2024-11-20 17:06:36.868507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:13.009 [2024-11-20 17:06:36.871411] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:13.268 [2024-11-20 17:06:37.019712] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:13.527 [2024-11-20 17:06:37.261481] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:13.527 [2024-11-20 17:06:37.262616] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:14.046 116.00 IOPS, 348.00 MiB/s [2024-11-20T17:06:37.915Z] [2024-11-20 17:06:37.663345] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:14.046 17:06:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.046 17:06:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.046 17:06:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.046 17:06:37 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.046 17:06:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.046 17:06:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.046 17:06:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.046 17:06:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.046 17:06:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.046 17:06:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.046 17:06:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.046 "name": "raid_bdev1", 00:14:14.046 "uuid": "42a10883-91b6-4909-92fd-41727e37069f", 00:14:14.046 "strip_size_kb": 0, 00:14:14.046 "state": "online", 00:14:14.046 "raid_level": "raid1", 00:14:14.046 "superblock": false, 00:14:14.046 "num_base_bdevs": 4, 00:14:14.046 "num_base_bdevs_discovered": 4, 00:14:14.046 "num_base_bdevs_operational": 4, 00:14:14.046 "process": { 00:14:14.046 "type": "rebuild", 00:14:14.046 "target": "spare", 00:14:14.046 "progress": { 00:14:14.046 "blocks": 8192, 00:14:14.046 "percent": 12 00:14:14.046 } 00:14:14.046 }, 00:14:14.046 "base_bdevs_list": [ 00:14:14.046 { 00:14:14.046 "name": "spare", 00:14:14.046 "uuid": "033e1996-8ce0-525f-94e5-b87efb536167", 00:14:14.046 "is_configured": true, 00:14:14.046 "data_offset": 0, 00:14:14.046 "data_size": 65536 00:14:14.046 }, 00:14:14.046 { 00:14:14.046 "name": "BaseBdev2", 00:14:14.046 "uuid": "e2ca1df1-2c56-5101-95d9-4ec5dc68020e", 00:14:14.046 "is_configured": true, 00:14:14.046 "data_offset": 0, 00:14:14.046 "data_size": 65536 00:14:14.046 }, 00:14:14.046 { 00:14:14.046 "name": "BaseBdev3", 00:14:14.046 "uuid": "cd9d9eeb-80a3-5e3a-800c-b30a5d08b800", 00:14:14.046 
"is_configured": true, 00:14:14.046 "data_offset": 0, 00:14:14.046 "data_size": 65536 00:14:14.046 }, 00:14:14.046 { 00:14:14.046 "name": "BaseBdev4", 00:14:14.046 "uuid": "15d03e10-2f83-5b95-a2e1-54697d045ca3", 00:14:14.046 "is_configured": true, 00:14:14.046 "data_offset": 0, 00:14:14.046 "data_size": 65536 00:14:14.046 } 00:14:14.046 ] 00:14:14.046 }' 00:14:14.046 17:06:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.046 [2024-11-20 17:06:37.910654] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:14.046 [2024-11-20 17:06:37.911675] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:14.305 17:06:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.305 17:06:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.305 17:06:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.305 17:06:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:14.305 17:06:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.305 17:06:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.305 [2024-11-20 17:06:37.991876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.305 [2024-11-20 17:06:38.025445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:14.305 [2024-11-20 17:06:38.129074] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:14.305 [2024-11-20 17:06:38.131514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:14.305 [2024-11-20 17:06:38.131587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.305 [2024-11-20 17:06:38.131603] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:14.305 [2024-11-20 17:06:38.160574] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:14.564 17:06:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.564 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:14.564 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.564 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.564 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.564 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.564 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.564 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.564 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.564 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.564 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.564 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.564 17:06:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.564 17:06:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.564 17:06:38 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.564 17:06:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.564 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.564 "name": "raid_bdev1", 00:14:14.564 "uuid": "42a10883-91b6-4909-92fd-41727e37069f", 00:14:14.564 "strip_size_kb": 0, 00:14:14.564 "state": "online", 00:14:14.564 "raid_level": "raid1", 00:14:14.564 "superblock": false, 00:14:14.564 "num_base_bdevs": 4, 00:14:14.564 "num_base_bdevs_discovered": 3, 00:14:14.564 "num_base_bdevs_operational": 3, 00:14:14.564 "base_bdevs_list": [ 00:14:14.564 { 00:14:14.564 "name": null, 00:14:14.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.564 "is_configured": false, 00:14:14.564 "data_offset": 0, 00:14:14.564 "data_size": 65536 00:14:14.564 }, 00:14:14.564 { 00:14:14.564 "name": "BaseBdev2", 00:14:14.564 "uuid": "e2ca1df1-2c56-5101-95d9-4ec5dc68020e", 00:14:14.564 "is_configured": true, 00:14:14.564 "data_offset": 0, 00:14:14.564 "data_size": 65536 00:14:14.564 }, 00:14:14.564 { 00:14:14.564 "name": "BaseBdev3", 00:14:14.564 "uuid": "cd9d9eeb-80a3-5e3a-800c-b30a5d08b800", 00:14:14.564 "is_configured": true, 00:14:14.564 "data_offset": 0, 00:14:14.564 "data_size": 65536 00:14:14.564 }, 00:14:14.564 { 00:14:14.565 "name": "BaseBdev4", 00:14:14.565 "uuid": "15d03e10-2f83-5b95-a2e1-54697d045ca3", 00:14:14.565 "is_configured": true, 00:14:14.565 "data_offset": 0, 00:14:14.565 "data_size": 65536 00:14:14.565 } 00:14:14.565 ] 00:14:14.565 }' 00:14:14.565 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.565 17:06:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.132 91.50 IOPS, 274.50 MiB/s [2024-11-20T17:06:39.001Z] 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.132 17:06:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.132 "name": "raid_bdev1", 00:14:15.132 "uuid": "42a10883-91b6-4909-92fd-41727e37069f", 00:14:15.132 "strip_size_kb": 0, 00:14:15.132 "state": "online", 00:14:15.132 "raid_level": "raid1", 00:14:15.132 "superblock": false, 00:14:15.132 "num_base_bdevs": 4, 00:14:15.132 "num_base_bdevs_discovered": 3, 00:14:15.132 "num_base_bdevs_operational": 3, 00:14:15.132 "base_bdevs_list": [ 00:14:15.132 { 00:14:15.132 "name": null, 00:14:15.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.132 "is_configured": false, 00:14:15.132 "data_offset": 0, 00:14:15.132 "data_size": 65536 00:14:15.132 }, 00:14:15.132 { 00:14:15.132 "name": "BaseBdev2", 00:14:15.132 "uuid": "e2ca1df1-2c56-5101-95d9-4ec5dc68020e", 00:14:15.132 "is_configured": true, 00:14:15.132 "data_offset": 0, 00:14:15.132 "data_size": 65536 00:14:15.132 }, 00:14:15.132 { 00:14:15.132 "name": "BaseBdev3", 00:14:15.132 "uuid": "cd9d9eeb-80a3-5e3a-800c-b30a5d08b800", 
00:14:15.132 "is_configured": true, 00:14:15.132 "data_offset": 0, 00:14:15.132 "data_size": 65536 00:14:15.132 }, 00:14:15.132 { 00:14:15.132 "name": "BaseBdev4", 00:14:15.132 "uuid": "15d03e10-2f83-5b95-a2e1-54697d045ca3", 00:14:15.132 "is_configured": true, 00:14:15.132 "data_offset": 0, 00:14:15.132 "data_size": 65536 00:14:15.132 } 00:14:15.132 ] 00:14:15.132 }' 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.132 [2024-11-20 17:06:38.879910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.132 17:06:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:15.132 [2024-11-20 17:06:38.972560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:15.132 [2024-11-20 17:06:38.975363] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:15.391 [2024-11-20 17:06:39.077855] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:15.391 [2024-11-20 17:06:39.078588] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:15.650 [2024-11-20 17:06:39.310849] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:15.650 [2024-11-20 17:06:39.311968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:15.909 118.33 IOPS, 355.00 MiB/s [2024-11-20T17:06:39.778Z] [2024-11-20 17:06:39.664371] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:15.909 [2024-11-20 17:06:39.665074] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:16.169 [2024-11-20 17:06:39.881678] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:16.169 17:06:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.169 17:06:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.169 17:06:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.169 17:06:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.169 17:06:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.169 17:06:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.169 17:06:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.169 17:06:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.169 17:06:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.169 17:06:39 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.169 17:06:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.169 "name": "raid_bdev1", 00:14:16.169 "uuid": "42a10883-91b6-4909-92fd-41727e37069f", 00:14:16.169 "strip_size_kb": 0, 00:14:16.169 "state": "online", 00:14:16.169 "raid_level": "raid1", 00:14:16.169 "superblock": false, 00:14:16.169 "num_base_bdevs": 4, 00:14:16.169 "num_base_bdevs_discovered": 4, 00:14:16.169 "num_base_bdevs_operational": 4, 00:14:16.169 "process": { 00:14:16.169 "type": "rebuild", 00:14:16.169 "target": "spare", 00:14:16.169 "progress": { 00:14:16.169 "blocks": 10240, 00:14:16.169 "percent": 15 00:14:16.169 } 00:14:16.169 }, 00:14:16.169 "base_bdevs_list": [ 00:14:16.169 { 00:14:16.169 "name": "spare", 00:14:16.169 "uuid": "033e1996-8ce0-525f-94e5-b87efb536167", 00:14:16.169 "is_configured": true, 00:14:16.169 "data_offset": 0, 00:14:16.169 "data_size": 65536 00:14:16.169 }, 00:14:16.169 { 00:14:16.169 "name": "BaseBdev2", 00:14:16.169 "uuid": "e2ca1df1-2c56-5101-95d9-4ec5dc68020e", 00:14:16.169 "is_configured": true, 00:14:16.169 "data_offset": 0, 00:14:16.169 "data_size": 65536 00:14:16.169 }, 00:14:16.169 { 00:14:16.169 "name": "BaseBdev3", 00:14:16.169 "uuid": "cd9d9eeb-80a3-5e3a-800c-b30a5d08b800", 00:14:16.169 "is_configured": true, 00:14:16.169 "data_offset": 0, 00:14:16.169 "data_size": 65536 00:14:16.169 }, 00:14:16.169 { 00:14:16.169 "name": "BaseBdev4", 00:14:16.169 "uuid": "15d03e10-2f83-5b95-a2e1-54697d045ca3", 00:14:16.169 "is_configured": true, 00:14:16.169 "data_offset": 0, 00:14:16.169 "data_size": 65536 00:14:16.169 } 00:14:16.169 ] 00:14:16.169 }' 00:14:16.169 17:06:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.169 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.169 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.429 [2024-11-20 17:06:40.089629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:16.429 [2024-11-20 17:06:40.244781] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:16.429 [2024-11-20 17:06:40.244866] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.429 17:06:40 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.429 17:06:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.688 "name": "raid_bdev1", 00:14:16.688 "uuid": "42a10883-91b6-4909-92fd-41727e37069f", 00:14:16.688 "strip_size_kb": 0, 00:14:16.688 "state": "online", 00:14:16.688 "raid_level": "raid1", 00:14:16.688 "superblock": false, 00:14:16.688 "num_base_bdevs": 4, 00:14:16.688 "num_base_bdevs_discovered": 3, 00:14:16.688 "num_base_bdevs_operational": 3, 00:14:16.688 "process": { 00:14:16.688 "type": "rebuild", 00:14:16.688 "target": "spare", 00:14:16.688 "progress": { 00:14:16.688 "blocks": 12288, 00:14:16.688 "percent": 18 00:14:16.688 } 00:14:16.688 }, 00:14:16.688 "base_bdevs_list": [ 00:14:16.688 { 00:14:16.688 "name": "spare", 00:14:16.688 "uuid": "033e1996-8ce0-525f-94e5-b87efb536167", 00:14:16.688 "is_configured": true, 00:14:16.688 "data_offset": 0, 00:14:16.688 "data_size": 65536 00:14:16.688 }, 00:14:16.688 { 00:14:16.688 "name": null, 00:14:16.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.688 "is_configured": false, 00:14:16.688 "data_offset": 0, 00:14:16.688 "data_size": 65536 00:14:16.688 }, 00:14:16.688 { 00:14:16.688 "name": "BaseBdev3", 00:14:16.688 "uuid": "cd9d9eeb-80a3-5e3a-800c-b30a5d08b800", 00:14:16.688 
"is_configured": true, 00:14:16.688 "data_offset": 0, 00:14:16.688 "data_size": 65536 00:14:16.688 }, 00:14:16.688 { 00:14:16.688 "name": "BaseBdev4", 00:14:16.688 "uuid": "15d03e10-2f83-5b95-a2e1-54697d045ca3", 00:14:16.688 "is_configured": true, 00:14:16.688 "data_offset": 0, 00:14:16.688 "data_size": 65536 00:14:16.688 } 00:14:16.688 ] 00:14:16.688 }' 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.688 [2024-11-20 17:06:40.383763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:16.688 112.25 IOPS, 336.75 MiB/s [2024-11-20T17:06:40.557Z] 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=516 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.688 "name": "raid_bdev1", 00:14:16.688 "uuid": "42a10883-91b6-4909-92fd-41727e37069f", 00:14:16.688 "strip_size_kb": 0, 00:14:16.688 "state": "online", 00:14:16.688 "raid_level": "raid1", 00:14:16.688 "superblock": false, 00:14:16.688 "num_base_bdevs": 4, 00:14:16.688 "num_base_bdevs_discovered": 3, 00:14:16.688 "num_base_bdevs_operational": 3, 00:14:16.688 "process": { 00:14:16.688 "type": "rebuild", 00:14:16.688 "target": "spare", 00:14:16.688 "progress": { 00:14:16.688 "blocks": 14336, 00:14:16.688 "percent": 21 00:14:16.688 } 00:14:16.688 }, 00:14:16.688 "base_bdevs_list": [ 00:14:16.688 { 00:14:16.688 "name": "spare", 00:14:16.688 "uuid": "033e1996-8ce0-525f-94e5-b87efb536167", 00:14:16.688 "is_configured": true, 00:14:16.688 "data_offset": 0, 00:14:16.688 "data_size": 65536 00:14:16.688 }, 00:14:16.688 { 00:14:16.688 "name": null, 00:14:16.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.688 "is_configured": false, 00:14:16.688 "data_offset": 0, 00:14:16.688 "data_size": 65536 00:14:16.688 }, 00:14:16.688 { 00:14:16.688 "name": "BaseBdev3", 00:14:16.688 "uuid": "cd9d9eeb-80a3-5e3a-800c-b30a5d08b800", 00:14:16.688 "is_configured": true, 00:14:16.688 "data_offset": 0, 00:14:16.688 "data_size": 65536 00:14:16.688 }, 00:14:16.688 { 00:14:16.688 "name": "BaseBdev4", 00:14:16.688 "uuid": "15d03e10-2f83-5b95-a2e1-54697d045ca3", 00:14:16.688 "is_configured": true, 00:14:16.688 "data_offset": 0, 00:14:16.688 "data_size": 65536 00:14:16.688 } 00:14:16.688 ] 00:14:16.688 }' 00:14:16.688 
17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.688 [2024-11-20 17:06:40.493995] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:16.688 [2024-11-20 17:06:40.494225] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.688 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.947 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.947 17:06:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:17.205 [2024-11-20 17:06:40.848437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:17.205 [2024-11-20 17:06:41.002292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:17.464 [2024-11-20 17:06:41.245842] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:17.723 99.80 IOPS, 299.40 MiB/s [2024-11-20T17:06:41.592Z] [2024-11-20 17:06:41.463084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:17.723 17:06:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.723 17:06:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.723 17:06:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.723 17:06:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.723 17:06:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.723 17:06:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.723 17:06:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.723 17:06:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.007 17:06:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.007 17:06:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.007 17:06:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.007 17:06:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.007 "name": "raid_bdev1", 00:14:18.007 "uuid": "42a10883-91b6-4909-92fd-41727e37069f", 00:14:18.007 "strip_size_kb": 0, 00:14:18.007 "state": "online", 00:14:18.007 "raid_level": "raid1", 00:14:18.007 "superblock": false, 00:14:18.007 "num_base_bdevs": 4, 00:14:18.007 "num_base_bdevs_discovered": 3, 00:14:18.007 "num_base_bdevs_operational": 3, 00:14:18.007 "process": { 00:14:18.007 "type": "rebuild", 00:14:18.007 "target": "spare", 00:14:18.007 "progress": { 00:14:18.007 "blocks": 30720, 00:14:18.007 "percent": 46 00:14:18.007 } 00:14:18.007 }, 00:14:18.007 "base_bdevs_list": [ 00:14:18.007 { 00:14:18.007 "name": "spare", 00:14:18.007 "uuid": "033e1996-8ce0-525f-94e5-b87efb536167", 00:14:18.007 "is_configured": true, 00:14:18.007 "data_offset": 0, 00:14:18.007 "data_size": 65536 00:14:18.007 }, 00:14:18.007 { 00:14:18.007 "name": null, 00:14:18.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.007 "is_configured": false, 00:14:18.007 "data_offset": 0, 00:14:18.007 "data_size": 65536 00:14:18.007 }, 00:14:18.007 { 00:14:18.007 "name": "BaseBdev3", 
00:14:18.007 "uuid": "cd9d9eeb-80a3-5e3a-800c-b30a5d08b800", 00:14:18.007 "is_configured": true, 00:14:18.007 "data_offset": 0, 00:14:18.007 "data_size": 65536 00:14:18.007 }, 00:14:18.007 { 00:14:18.007 "name": "BaseBdev4", 00:14:18.007 "uuid": "15d03e10-2f83-5b95-a2e1-54697d045ca3", 00:14:18.007 "is_configured": true, 00:14:18.007 "data_offset": 0, 00:14:18.007 "data_size": 65536 00:14:18.007 } 00:14:18.007 ] 00:14:18.007 }' 00:14:18.008 17:06:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.008 17:06:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.008 17:06:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.008 [2024-11-20 17:06:41.714230] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:18.008 17:06:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.008 17:06:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:18.008 [2024-11-20 17:06:41.834852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:19.142 93.50 IOPS, 280.50 MiB/s [2024-11-20T17:06:43.011Z] 17:06:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.142 17:06:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.142 17:06:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.142 17:06:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.142 17:06:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.142 17:06:42 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.142 17:06:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.142 17:06:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.142 17:06:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.142 17:06:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.142 17:06:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.142 17:06:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.142 "name": "raid_bdev1", 00:14:19.142 "uuid": "42a10883-91b6-4909-92fd-41727e37069f", 00:14:19.142 "strip_size_kb": 0, 00:14:19.142 "state": "online", 00:14:19.142 "raid_level": "raid1", 00:14:19.142 "superblock": false, 00:14:19.142 "num_base_bdevs": 4, 00:14:19.142 "num_base_bdevs_discovered": 3, 00:14:19.142 "num_base_bdevs_operational": 3, 00:14:19.142 "process": { 00:14:19.142 "type": "rebuild", 00:14:19.142 "target": "spare", 00:14:19.142 "progress": { 00:14:19.142 "blocks": 51200, 00:14:19.142 "percent": 78 00:14:19.142 } 00:14:19.142 }, 00:14:19.142 "base_bdevs_list": [ 00:14:19.142 { 00:14:19.142 "name": "spare", 00:14:19.142 "uuid": "033e1996-8ce0-525f-94e5-b87efb536167", 00:14:19.142 "is_configured": true, 00:14:19.142 "data_offset": 0, 00:14:19.142 "data_size": 65536 00:14:19.142 }, 00:14:19.142 { 00:14:19.142 "name": null, 00:14:19.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.142 "is_configured": false, 00:14:19.142 "data_offset": 0, 00:14:19.142 "data_size": 65536 00:14:19.142 }, 00:14:19.142 { 00:14:19.142 "name": "BaseBdev3", 00:14:19.142 "uuid": "cd9d9eeb-80a3-5e3a-800c-b30a5d08b800", 00:14:19.142 "is_configured": true, 00:14:19.142 "data_offset": 0, 00:14:19.142 "data_size": 65536 00:14:19.142 }, 00:14:19.142 { 00:14:19.142 "name": 
"BaseBdev4", 00:14:19.142 "uuid": "15d03e10-2f83-5b95-a2e1-54697d045ca3", 00:14:19.142 "is_configured": true, 00:14:19.142 "data_offset": 0, 00:14:19.142 "data_size": 65536 00:14:19.142 } 00:14:19.142 ] 00:14:19.142 }' 00:14:19.142 17:06:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.142 [2024-11-20 17:06:42.829735] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:19.142 17:06:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.142 17:06:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.142 17:06:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.142 17:06:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.709 [2024-11-20 17:06:43.291636] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:19.968 87.43 IOPS, 262.29 MiB/s [2024-11-20T17:06:43.837Z] [2024-11-20 17:06:43.633949] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:19.968 [2024-11-20 17:06:43.733931] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:19.968 [2024-11-20 17:06:43.736881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.226 17:06:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.226 17:06:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.226 17:06:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.226 17:06:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:20.226 17:06:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.226 17:06:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.226 17:06:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.226 17:06:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.226 17:06:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.226 17:06:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.226 17:06:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.226 17:06:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.226 "name": "raid_bdev1", 00:14:20.226 "uuid": "42a10883-91b6-4909-92fd-41727e37069f", 00:14:20.226 "strip_size_kb": 0, 00:14:20.226 "state": "online", 00:14:20.226 "raid_level": "raid1", 00:14:20.226 "superblock": false, 00:14:20.226 "num_base_bdevs": 4, 00:14:20.226 "num_base_bdevs_discovered": 3, 00:14:20.226 "num_base_bdevs_operational": 3, 00:14:20.226 "base_bdevs_list": [ 00:14:20.226 { 00:14:20.226 "name": "spare", 00:14:20.226 "uuid": "033e1996-8ce0-525f-94e5-b87efb536167", 00:14:20.226 "is_configured": true, 00:14:20.226 "data_offset": 0, 00:14:20.226 "data_size": 65536 00:14:20.226 }, 00:14:20.226 { 00:14:20.226 "name": null, 00:14:20.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.226 "is_configured": false, 00:14:20.226 "data_offset": 0, 00:14:20.226 "data_size": 65536 00:14:20.226 }, 00:14:20.226 { 00:14:20.226 "name": "BaseBdev3", 00:14:20.226 "uuid": "cd9d9eeb-80a3-5e3a-800c-b30a5d08b800", 00:14:20.226 "is_configured": true, 00:14:20.226 "data_offset": 0, 00:14:20.226 "data_size": 65536 00:14:20.226 }, 00:14:20.226 { 00:14:20.226 "name": "BaseBdev4", 00:14:20.226 "uuid": 
"15d03e10-2f83-5b95-a2e1-54697d045ca3", 00:14:20.226 "is_configured": true, 00:14:20.226 "data_offset": 0, 00:14:20.226 "data_size": 65536 00:14:20.226 } 00:14:20.226 ] 00:14:20.226 }' 00:14:20.226 17:06:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.226 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:20.226 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.226 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:20.226 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:20.226 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:20.226 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.226 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:20.226 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:20.226 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.226 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.226 17:06:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.226 17:06:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.226 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.226 17:06:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.484 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.484 "name": "raid_bdev1", 00:14:20.484 "uuid": 
"42a10883-91b6-4909-92fd-41727e37069f", 00:14:20.484 "strip_size_kb": 0, 00:14:20.484 "state": "online", 00:14:20.484 "raid_level": "raid1", 00:14:20.484 "superblock": false, 00:14:20.484 "num_base_bdevs": 4, 00:14:20.484 "num_base_bdevs_discovered": 3, 00:14:20.484 "num_base_bdevs_operational": 3, 00:14:20.484 "base_bdevs_list": [ 00:14:20.484 { 00:14:20.484 "name": "spare", 00:14:20.484 "uuid": "033e1996-8ce0-525f-94e5-b87efb536167", 00:14:20.484 "is_configured": true, 00:14:20.484 "data_offset": 0, 00:14:20.484 "data_size": 65536 00:14:20.484 }, 00:14:20.484 { 00:14:20.484 "name": null, 00:14:20.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.484 "is_configured": false, 00:14:20.484 "data_offset": 0, 00:14:20.484 "data_size": 65536 00:14:20.484 }, 00:14:20.484 { 00:14:20.484 "name": "BaseBdev3", 00:14:20.485 "uuid": "cd9d9eeb-80a3-5e3a-800c-b30a5d08b800", 00:14:20.485 "is_configured": true, 00:14:20.485 "data_offset": 0, 00:14:20.485 "data_size": 65536 00:14:20.485 }, 00:14:20.485 { 00:14:20.485 "name": "BaseBdev4", 00:14:20.485 "uuid": "15d03e10-2f83-5b95-a2e1-54697d045ca3", 00:14:20.485 "is_configured": true, 00:14:20.485 "data_offset": 0, 00:14:20.485 "data_size": 65536 00:14:20.485 } 00:14:20.485 ] 00:14:20.485 }' 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.485 "name": "raid_bdev1", 00:14:20.485 "uuid": "42a10883-91b6-4909-92fd-41727e37069f", 00:14:20.485 "strip_size_kb": 0, 00:14:20.485 "state": "online", 00:14:20.485 "raid_level": "raid1", 00:14:20.485 "superblock": false, 00:14:20.485 "num_base_bdevs": 4, 00:14:20.485 "num_base_bdevs_discovered": 3, 00:14:20.485 "num_base_bdevs_operational": 3, 00:14:20.485 "base_bdevs_list": [ 00:14:20.485 { 00:14:20.485 "name": "spare", 00:14:20.485 "uuid": "033e1996-8ce0-525f-94e5-b87efb536167", 00:14:20.485 "is_configured": true, 00:14:20.485 
"data_offset": 0, 00:14:20.485 "data_size": 65536 00:14:20.485 }, 00:14:20.485 { 00:14:20.485 "name": null, 00:14:20.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.485 "is_configured": false, 00:14:20.485 "data_offset": 0, 00:14:20.485 "data_size": 65536 00:14:20.485 }, 00:14:20.485 { 00:14:20.485 "name": "BaseBdev3", 00:14:20.485 "uuid": "cd9d9eeb-80a3-5e3a-800c-b30a5d08b800", 00:14:20.485 "is_configured": true, 00:14:20.485 "data_offset": 0, 00:14:20.485 "data_size": 65536 00:14:20.485 }, 00:14:20.485 { 00:14:20.485 "name": "BaseBdev4", 00:14:20.485 "uuid": "15d03e10-2f83-5b95-a2e1-54697d045ca3", 00:14:20.485 "is_configured": true, 00:14:20.485 "data_offset": 0, 00:14:20.485 "data_size": 65536 00:14:20.485 } 00:14:20.485 ] 00:14:20.485 }' 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.485 17:06:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.004 80.62 IOPS, 241.88 MiB/s [2024-11-20T17:06:44.873Z] 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:21.004 17:06:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.004 17:06:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.004 [2024-11-20 17:06:44.747445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:21.004 [2024-11-20 17:06:44.747478] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.004 00:14:21.004 Latency(us) 00:14:21.004 [2024-11-20T17:06:44.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.004 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:21.004 raid_bdev1 : 8.45 77.64 232.93 0.00 0.00 16999.96 251.35 121539.49 00:14:21.004 [2024-11-20T17:06:44.873Z] 
=================================================================================================================== 00:14:21.004 [2024-11-20T17:06:44.873Z] Total : 77.64 232.93 0.00 0.00 16999.96 251.35 121539.49 00:14:21.004 [2024-11-20 17:06:44.856642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.004 [2024-11-20 17:06:44.856747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.004 [2024-11-20 17:06:44.856924] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:21.004 [2024-11-20 17:06:44.856946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:21.004 { 00:14:21.004 "results": [ 00:14:21.004 { 00:14:21.004 "job": "raid_bdev1", 00:14:21.004 "core_mask": "0x1", 00:14:21.004 "workload": "randrw", 00:14:21.004 "percentage": 50, 00:14:21.004 "status": "finished", 00:14:21.004 "queue_depth": 2, 00:14:21.004 "io_size": 3145728, 00:14:21.004 "runtime": 8.448867, 00:14:21.004 "iops": 77.64354676195045, 00:14:21.004 "mibps": 232.93064028585133, 00:14:21.004 "io_failed": 0, 00:14:21.004 "io_timeout": 0, 00:14:21.004 "avg_latency_us": 16999.962394678492, 00:14:21.004 "min_latency_us": 251.34545454545454, 00:14:21.004 "max_latency_us": 121539.4909090909 00:14:21.004 } 00:14:21.004 ], 00:14:21.004 "core_count": 1 00:14:21.004 } 00:14:21.004 17:06:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.004 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.004 17:06:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.004 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:21.004 17:06:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.264 17:06:44 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.264 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:21.264 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:21.264 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:21.264 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:21.264 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.264 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:21.264 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:21.264 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:21.264 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:21.264 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:21.264 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:21.264 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.264 17:06:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:21.523 /dev/nbd0 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.523 1+0 records in 00:14:21.523 1+0 records out 00:14:21.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036294 s, 11.3 MB/s 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:21.523 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:21.524 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 
00:14:21.524 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:21.524 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:21.524 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:21.524 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.524 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:21.524 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:21.524 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:21.524 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:21.524 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:21.524 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:21.524 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.524 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:21.783 /dev/nbd1 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.783 1+0 records in 00:14:21.783 1+0 records out 00:14:21.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385008 s, 10.6 MB/s 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:21.783 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:22.042 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:22.042 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.042 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd1') 00:14:22.042 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:22.042 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:22.042 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:22.042 17:06:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:22.301 17:06:46 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:22.301 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:22.869 /dev/nbd1 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.869 1+0 records in 00:14:22.869 1+0 records out 00:14:22.869 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000419241 s, 9.8 MB/s 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:22.869 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:23.128 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:23.129 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:23.129 17:06:46 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:23.129 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.129 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.129 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:23.129 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:23.129 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.129 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:23.129 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.129 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:23.129 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:23.129 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:23.129 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.129 17:06:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78849 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78849 ']' 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78849 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78849 00:14:23.388 killing process with pid 78849 00:14:23.388 Received shutdown signal, test time was about 10.818168 seconds 00:14:23.388 00:14:23.388 Latency(us) 00:14:23.388 [2024-11-20T17:06:47.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.388 [2024-11-20T17:06:47.257Z] =================================================================================================================== 00:14:23.388 [2024-11-20T17:06:47.257Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78849' 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78849 00:14:23.388 17:06:47 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@978 -- # wait 78849 00:14:23.388 [2024-11-20 17:06:47.204682] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:23.956 [2024-11-20 17:06:47.572148] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:24.893 ************************************ 00:14:24.893 END TEST raid_rebuild_test_io 00:14:24.893 ************************************ 00:14:24.893 00:14:24.893 real 0m14.412s 00:14:24.893 user 0m19.034s 00:14:24.893 sys 0m1.807s 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.893 17:06:48 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:24.893 17:06:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:24.893 17:06:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.893 17:06:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:24.893 ************************************ 00:14:24.893 START TEST raid_rebuild_test_sb_io 00:14:24.893 ************************************ 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 
00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 
-- # local create_arg 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79269 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79269 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79269 ']' 00:14:24.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:24.893 17:06:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.153 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:14:25.153 Zero copy mechanism will not be used. 00:14:25.153 [2024-11-20 17:06:48.814784] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:14:25.153 [2024-11-20 17:06:48.814973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79269 ] 00:14:25.153 [2024-11-20 17:06:49.001246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.412 [2024-11-20 17:06:49.124883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.670 [2024-11-20 17:06:49.321856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.670 [2024-11-20 17:06:49.321919] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.975 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.975 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:25.975 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:25.975 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:25.975 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.975 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.975 BaseBdev1_malloc 00:14:25.975 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.975 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:25.975 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:25.975 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.975 [2024-11-20 17:06:49.789716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:25.975 [2024-11-20 17:06:49.789987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.975 [2024-11-20 17:06:49.790064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:25.975 [2024-11-20 17:06:49.790289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.975 [2024-11-20 17:06:49.793188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.975 [2024-11-20 17:06:49.793237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:25.975 BaseBdev1 00:14:25.975 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.975 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:25.975 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:25.975 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.975 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.233 BaseBdev2_malloc 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.233 [2024-11-20 17:06:49.842161] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:26.233 [2024-11-20 17:06:49.842429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.233 [2024-11-20 17:06:49.842503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:26.233 [2024-11-20 17:06:49.842672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.233 [2024-11-20 17:06:49.845680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.233 [2024-11-20 17:06:49.845739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:26.233 BaseBdev2 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.233 BaseBdev3_malloc 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.233 [2024-11-20 17:06:49.900394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:26.233 [2024-11-20 17:06:49.900635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:26.233 [2024-11-20 17:06:49.900711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:26.233 [2024-11-20 17:06:49.900962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.233 [2024-11-20 17:06:49.903922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.233 BaseBdev3 00:14:26.233 [2024-11-20 17:06:49.904118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.233 BaseBdev4_malloc 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.233 [2024-11-20 17:06:49.946460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:26.233 [2024-11-20 17:06:49.946691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.233 [2024-11-20 17:06:49.946792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:26.233 
[2024-11-20 17:06:49.946974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.233 [2024-11-20 17:06:49.949785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.233 [2024-11-20 17:06:49.949996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:26.233 BaseBdev4 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.233 spare_malloc 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.233 17:06:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.233 spare_delay 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.234 [2024-11-20 17:06:50.008389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:26.234 [2024-11-20 17:06:50.008678] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.234 [2024-11-20 17:06:50.008749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:26.234 [2024-11-20 17:06:50.008888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.234 [2024-11-20 17:06:50.011874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.234 spare 00:14:26.234 [2024-11-20 17:06:50.012029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.234 [2024-11-20 17:06:50.016669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:26.234 [2024-11-20 17:06:50.019343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:26.234 [2024-11-20 17:06:50.019490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:26.234 [2024-11-20 17:06:50.019592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:26.234 [2024-11-20 17:06:50.019840] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:26.234 [2024-11-20 17:06:50.019889] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:26.234 [2024-11-20 17:06:50.020265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:26.234 [2024-11-20 17:06:50.020556] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:26.234 [2024-11-20 17:06:50.020571] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:26.234 [2024-11-20 17:06:50.020799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.234 17:06:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.234 "name": "raid_bdev1", 00:14:26.234 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:26.234 "strip_size_kb": 0, 00:14:26.234 "state": "online", 00:14:26.234 "raid_level": "raid1", 00:14:26.234 "superblock": true, 00:14:26.234 "num_base_bdevs": 4, 00:14:26.234 "num_base_bdevs_discovered": 4, 00:14:26.234 "num_base_bdevs_operational": 4, 00:14:26.234 "base_bdevs_list": [ 00:14:26.234 { 00:14:26.234 "name": "BaseBdev1", 00:14:26.234 "uuid": "15f91feb-9fc7-5f17-8487-c0d3806abae2", 00:14:26.234 "is_configured": true, 00:14:26.234 "data_offset": 2048, 00:14:26.234 "data_size": 63488 00:14:26.234 }, 00:14:26.234 { 00:14:26.234 "name": "BaseBdev2", 00:14:26.234 "uuid": "aa3d076e-118b-5e67-9448-278cf9951711", 00:14:26.234 "is_configured": true, 00:14:26.234 "data_offset": 2048, 00:14:26.234 "data_size": 63488 00:14:26.234 }, 00:14:26.234 { 00:14:26.234 "name": "BaseBdev3", 00:14:26.234 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:26.234 "is_configured": true, 00:14:26.234 "data_offset": 2048, 00:14:26.234 "data_size": 63488 00:14:26.234 }, 00:14:26.234 { 00:14:26.234 "name": "BaseBdev4", 00:14:26.234 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:26.234 "is_configured": true, 00:14:26.234 "data_offset": 2048, 00:14:26.234 "data_size": 63488 00:14:26.234 } 00:14:26.234 ] 00:14:26.234 }' 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.234 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.800 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:26.800 17:06:50 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.800 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:26.800 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.800 [2024-11-20 17:06:50.533426] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.800 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.800 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:26.800 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:26.800 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.800 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.800 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.800 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.800 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:26.800 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:26.800 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:26.800 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:26.800 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.800 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.801 [2024-11-20 17:06:50.640976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:26.801 
17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.801 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:26.801 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.801 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.801 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.801 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.801 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.801 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.801 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.801 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.801 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.801 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.801 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.801 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.801 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.058 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.058 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.058 "name": "raid_bdev1", 00:14:27.058 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 
00:14:27.058 "strip_size_kb": 0, 00:14:27.058 "state": "online", 00:14:27.058 "raid_level": "raid1", 00:14:27.058 "superblock": true, 00:14:27.058 "num_base_bdevs": 4, 00:14:27.058 "num_base_bdevs_discovered": 3, 00:14:27.058 "num_base_bdevs_operational": 3, 00:14:27.058 "base_bdevs_list": [ 00:14:27.058 { 00:14:27.058 "name": null, 00:14:27.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.058 "is_configured": false, 00:14:27.058 "data_offset": 0, 00:14:27.058 "data_size": 63488 00:14:27.058 }, 00:14:27.058 { 00:14:27.058 "name": "BaseBdev2", 00:14:27.058 "uuid": "aa3d076e-118b-5e67-9448-278cf9951711", 00:14:27.058 "is_configured": true, 00:14:27.058 "data_offset": 2048, 00:14:27.058 "data_size": 63488 00:14:27.058 }, 00:14:27.058 { 00:14:27.058 "name": "BaseBdev3", 00:14:27.058 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:27.058 "is_configured": true, 00:14:27.058 "data_offset": 2048, 00:14:27.058 "data_size": 63488 00:14:27.058 }, 00:14:27.058 { 00:14:27.058 "name": "BaseBdev4", 00:14:27.058 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:27.058 "is_configured": true, 00:14:27.058 "data_offset": 2048, 00:14:27.058 "data_size": 63488 00:14:27.058 } 00:14:27.058 ] 00:14:27.058 }' 00:14:27.058 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.058 17:06:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.058 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:27.058 Zero copy mechanism will not be used. 00:14:27.058 Running I/O for 60 seconds... 
00:14:27.058 [2024-11-20 17:06:50.777487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:27.316 17:06:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:27.316 17:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.316 17:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.316 [2024-11-20 17:06:51.181137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:27.574 17:06:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.574 17:06:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:27.574 [2024-11-20 17:06:51.226487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:27.574 [2024-11-20 17:06:51.228990] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:27.574 [2024-11-20 17:06:51.355386] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:27.574 [2024-11-20 17:06:51.357271] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:27.831 [2024-11-20 17:06:51.596885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:27.831 [2024-11-20 17:06:51.597296] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:28.089 157.00 IOPS, 471.00 MiB/s [2024-11-20T17:06:51.958Z] [2024-11-20 17:06:51.852684] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:28.089 [2024-11-20 17:06:51.853479] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:28.347 [2024-11-20 17:06:51.999921] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:28.347 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.347 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.347 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.347 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.606 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.606 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.606 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.606 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.606 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.606 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.606 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.606 "name": "raid_bdev1", 00:14:28.606 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:28.606 "strip_size_kb": 0, 00:14:28.606 "state": "online", 00:14:28.606 "raid_level": "raid1", 00:14:28.606 "superblock": true, 00:14:28.606 "num_base_bdevs": 4, 00:14:28.606 "num_base_bdevs_discovered": 4, 00:14:28.606 "num_base_bdevs_operational": 4, 00:14:28.606 "process": { 00:14:28.606 "type": "rebuild", 00:14:28.606 "target": "spare", 00:14:28.606 "progress": { 00:14:28.606 "blocks": 12288, 00:14:28.606 
"percent": 19 00:14:28.606 } 00:14:28.606 }, 00:14:28.606 "base_bdevs_list": [ 00:14:28.606 { 00:14:28.606 "name": "spare", 00:14:28.606 "uuid": "2f79cd8f-e488-5d75-9203-c66f7dbe338e", 00:14:28.606 "is_configured": true, 00:14:28.606 "data_offset": 2048, 00:14:28.606 "data_size": 63488 00:14:28.606 }, 00:14:28.606 { 00:14:28.606 "name": "BaseBdev2", 00:14:28.606 "uuid": "aa3d076e-118b-5e67-9448-278cf9951711", 00:14:28.606 "is_configured": true, 00:14:28.606 "data_offset": 2048, 00:14:28.606 "data_size": 63488 00:14:28.606 }, 00:14:28.606 { 00:14:28.606 "name": "BaseBdev3", 00:14:28.606 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:28.606 "is_configured": true, 00:14:28.606 "data_offset": 2048, 00:14:28.606 "data_size": 63488 00:14:28.606 }, 00:14:28.606 { 00:14:28.606 "name": "BaseBdev4", 00:14:28.606 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:28.606 "is_configured": true, 00:14:28.606 "data_offset": 2048, 00:14:28.606 "data_size": 63488 00:14:28.606 } 00:14:28.606 ] 00:14:28.606 }' 00:14:28.606 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.606 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.606 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.606 [2024-11-20 17:06:52.328210] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:28.606 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.606 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:28.606 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.606 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.607 
[2024-11-20 17:06:52.377267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.607 [2024-11-20 17:06:52.441216] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:28.866 [2024-11-20 17:06:52.544548] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:28.866 [2024-11-20 17:06:52.556268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.866 [2024-11-20 17:06:52.556332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.866 [2024-11-20 17:06:52.556347] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:28.866 [2024-11-20 17:06:52.595013] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.866 "name": "raid_bdev1", 00:14:28.866 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:28.866 "strip_size_kb": 0, 00:14:28.866 "state": "online", 00:14:28.866 "raid_level": "raid1", 00:14:28.866 "superblock": true, 00:14:28.866 "num_base_bdevs": 4, 00:14:28.866 "num_base_bdevs_discovered": 3, 00:14:28.866 "num_base_bdevs_operational": 3, 00:14:28.866 "base_bdevs_list": [ 00:14:28.866 { 00:14:28.866 "name": null, 00:14:28.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.866 "is_configured": false, 00:14:28.866 "data_offset": 0, 00:14:28.866 "data_size": 63488 00:14:28.866 }, 00:14:28.866 { 00:14:28.866 "name": "BaseBdev2", 00:14:28.866 "uuid": "aa3d076e-118b-5e67-9448-278cf9951711", 00:14:28.866 "is_configured": true, 00:14:28.866 "data_offset": 2048, 00:14:28.866 "data_size": 63488 00:14:28.866 }, 00:14:28.866 { 00:14:28.866 "name": "BaseBdev3", 00:14:28.866 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:28.866 "is_configured": true, 00:14:28.866 "data_offset": 2048, 00:14:28.866 "data_size": 63488 00:14:28.866 }, 00:14:28.866 { 00:14:28.866 "name": "BaseBdev4", 00:14:28.866 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 
00:14:28.866 "is_configured": true, 00:14:28.866 "data_offset": 2048, 00:14:28.866 "data_size": 63488 00:14:28.866 } 00:14:28.866 ] 00:14:28.866 }' 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.866 17:06:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.384 122.00 IOPS, 366.00 MiB/s [2024-11-20T17:06:53.253Z] 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:29.384 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.384 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:29.384 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:29.384 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.384 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.384 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.384 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.384 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.384 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.385 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.385 "name": "raid_bdev1", 00:14:29.385 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:29.385 "strip_size_kb": 0, 00:14:29.385 "state": "online", 00:14:29.385 "raid_level": "raid1", 00:14:29.385 "superblock": true, 00:14:29.385 "num_base_bdevs": 4, 00:14:29.385 "num_base_bdevs_discovered": 3, 00:14:29.385 "num_base_bdevs_operational": 3, 
00:14:29.385 "base_bdevs_list": [ 00:14:29.385 { 00:14:29.385 "name": null, 00:14:29.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.385 "is_configured": false, 00:14:29.385 "data_offset": 0, 00:14:29.385 "data_size": 63488 00:14:29.385 }, 00:14:29.385 { 00:14:29.385 "name": "BaseBdev2", 00:14:29.385 "uuid": "aa3d076e-118b-5e67-9448-278cf9951711", 00:14:29.385 "is_configured": true, 00:14:29.385 "data_offset": 2048, 00:14:29.385 "data_size": 63488 00:14:29.385 }, 00:14:29.385 { 00:14:29.385 "name": "BaseBdev3", 00:14:29.385 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:29.385 "is_configured": true, 00:14:29.385 "data_offset": 2048, 00:14:29.385 "data_size": 63488 00:14:29.385 }, 00:14:29.385 { 00:14:29.385 "name": "BaseBdev4", 00:14:29.385 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:29.385 "is_configured": true, 00:14:29.385 "data_offset": 2048, 00:14:29.385 "data_size": 63488 00:14:29.385 } 00:14:29.385 ] 00:14:29.385 }' 00:14:29.385 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.385 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:29.385 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.644 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:29.644 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:29.644 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.644 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.644 [2024-11-20 17:06:53.296690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:29.644 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:29.644 17:06:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:29.644 [2024-11-20 17:06:53.367922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:29.644 [2024-11-20 17:06:53.370669] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:29.644 [2024-11-20 17:06:53.472257] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:29.644 [2024-11-20 17:06:53.472950] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:29.902 [2024-11-20 17:06:53.607495] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:30.160 136.00 IOPS, 408.00 MiB/s [2024-11-20T17:06:54.029Z] [2024-11-20 17:06:53.939605] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:30.160 [2024-11-20 17:06:53.940105] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:30.418 [2024-11-20 17:06:54.162379] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:30.418 [2024-11-20 17:06:54.162689] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.678 "name": "raid_bdev1", 00:14:30.678 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:30.678 "strip_size_kb": 0, 00:14:30.678 "state": "online", 00:14:30.678 "raid_level": "raid1", 00:14:30.678 "superblock": true, 00:14:30.678 "num_base_bdevs": 4, 00:14:30.678 "num_base_bdevs_discovered": 4, 00:14:30.678 "num_base_bdevs_operational": 4, 00:14:30.678 "process": { 00:14:30.678 "type": "rebuild", 00:14:30.678 "target": "spare", 00:14:30.678 "progress": { 00:14:30.678 "blocks": 10240, 00:14:30.678 "percent": 16 00:14:30.678 } 00:14:30.678 }, 00:14:30.678 "base_bdevs_list": [ 00:14:30.678 { 00:14:30.678 "name": "spare", 00:14:30.678 "uuid": "2f79cd8f-e488-5d75-9203-c66f7dbe338e", 00:14:30.678 "is_configured": true, 00:14:30.678 "data_offset": 2048, 00:14:30.678 "data_size": 63488 00:14:30.678 }, 00:14:30.678 { 00:14:30.678 "name": "BaseBdev2", 00:14:30.678 "uuid": "aa3d076e-118b-5e67-9448-278cf9951711", 00:14:30.678 "is_configured": true, 00:14:30.678 "data_offset": 2048, 00:14:30.678 "data_size": 63488 00:14:30.678 }, 00:14:30.678 { 00:14:30.678 "name": "BaseBdev3", 00:14:30.678 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:30.678 "is_configured": true, 00:14:30.678 
"data_offset": 2048, 00:14:30.678 "data_size": 63488 00:14:30.678 }, 00:14:30.678 { 00:14:30.678 "name": "BaseBdev4", 00:14:30.678 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:30.678 "is_configured": true, 00:14:30.678 "data_offset": 2048, 00:14:30.678 "data_size": 63488 00:14:30.678 } 00:14:30.678 ] 00:14:30.678 }' 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.678 [2024-11-20 17:06:54.492328] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:30.678 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.678 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.678 [2024-11-20 17:06:54.532849] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:30.938 [2024-11-20 17:06:54.632734] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:30.938 [2024-11-20 17:06:54.745838] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:30.938 [2024-11-20 17:06:54.746056] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:30.938 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.938 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:30.938 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:30.938 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.938 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.938 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.938 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.938 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.938 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.938 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.938 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.938 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.938 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.198 121.25 IOPS, 
363.75 MiB/s [2024-11-20T17:06:55.067Z] 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.198 "name": "raid_bdev1", 00:14:31.198 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:31.198 "strip_size_kb": 0, 00:14:31.198 "state": "online", 00:14:31.198 "raid_level": "raid1", 00:14:31.198 "superblock": true, 00:14:31.198 "num_base_bdevs": 4, 00:14:31.198 "num_base_bdevs_discovered": 3, 00:14:31.198 "num_base_bdevs_operational": 3, 00:14:31.198 "process": { 00:14:31.198 "type": "rebuild", 00:14:31.198 "target": "spare", 00:14:31.198 "progress": { 00:14:31.198 "blocks": 16384, 00:14:31.198 "percent": 25 00:14:31.198 } 00:14:31.198 }, 00:14:31.198 "base_bdevs_list": [ 00:14:31.198 { 00:14:31.198 "name": "spare", 00:14:31.198 "uuid": "2f79cd8f-e488-5d75-9203-c66f7dbe338e", 00:14:31.198 "is_configured": true, 00:14:31.198 "data_offset": 2048, 00:14:31.198 "data_size": 63488 00:14:31.198 }, 00:14:31.198 { 00:14:31.198 "name": null, 00:14:31.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.198 "is_configured": false, 00:14:31.198 "data_offset": 0, 00:14:31.198 "data_size": 63488 00:14:31.198 }, 00:14:31.198 { 00:14:31.198 "name": "BaseBdev3", 00:14:31.198 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:31.198 "is_configured": true, 00:14:31.198 "data_offset": 2048, 00:14:31.198 "data_size": 63488 00:14:31.198 }, 00:14:31.198 { 00:14:31.198 "name": "BaseBdev4", 00:14:31.198 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:31.198 "is_configured": true, 00:14:31.198 "data_offset": 2048, 00:14:31.198 "data_size": 63488 00:14:31.198 } 00:14:31.198 ] 00:14:31.198 }' 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=530 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.198 [2024-11-20 17:06:54.977830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:31.198 [2024-11-20 17:06:54.978891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.198 "name": "raid_bdev1", 00:14:31.198 "uuid": 
"c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:31.198 "strip_size_kb": 0, 00:14:31.198 "state": "online", 00:14:31.198 "raid_level": "raid1", 00:14:31.198 "superblock": true, 00:14:31.198 "num_base_bdevs": 4, 00:14:31.198 "num_base_bdevs_discovered": 3, 00:14:31.198 "num_base_bdevs_operational": 3, 00:14:31.198 "process": { 00:14:31.198 "type": "rebuild", 00:14:31.198 "target": "spare", 00:14:31.198 "progress": { 00:14:31.198 "blocks": 18432, 00:14:31.198 "percent": 29 00:14:31.198 } 00:14:31.198 }, 00:14:31.198 "base_bdevs_list": [ 00:14:31.198 { 00:14:31.198 "name": "spare", 00:14:31.198 "uuid": "2f79cd8f-e488-5d75-9203-c66f7dbe338e", 00:14:31.198 "is_configured": true, 00:14:31.198 "data_offset": 2048, 00:14:31.198 "data_size": 63488 00:14:31.198 }, 00:14:31.198 { 00:14:31.198 "name": null, 00:14:31.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.198 "is_configured": false, 00:14:31.198 "data_offset": 0, 00:14:31.198 "data_size": 63488 00:14:31.198 }, 00:14:31.198 { 00:14:31.198 "name": "BaseBdev3", 00:14:31.198 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:31.198 "is_configured": true, 00:14:31.198 "data_offset": 2048, 00:14:31.198 "data_size": 63488 00:14:31.198 }, 00:14:31.198 { 00:14:31.198 "name": "BaseBdev4", 00:14:31.198 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:31.198 "is_configured": true, 00:14:31.198 "data_offset": 2048, 00:14:31.198 "data_size": 63488 00:14:31.198 } 00:14:31.198 ] 00:14:31.198 }' 00:14:31.198 17:06:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.198 17:06:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.198 17:06:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.457 17:06:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.457 17:06:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:14:31.457 [2024-11-20 17:06:55.234895] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:31.457 [2024-11-20 17:06:55.235263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:32.281 107.00 IOPS, 321.00 MiB/s [2024-11-20T17:06:56.150Z] [2024-11-20 17:06:55.892427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:32.281 17:06:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:32.281 17:06:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.281 17:06:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.281 17:06:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.281 17:06:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.281 17:06:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.281 17:06:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.281 17:06:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.281 17:06:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.281 17:06:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.281 17:06:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.539 17:06:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.539 "name": "raid_bdev1", 00:14:32.539 "uuid": 
"c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:32.539 "strip_size_kb": 0, 00:14:32.539 "state": "online", 00:14:32.539 "raid_level": "raid1", 00:14:32.539 "superblock": true, 00:14:32.539 "num_base_bdevs": 4, 00:14:32.539 "num_base_bdevs_discovered": 3, 00:14:32.539 "num_base_bdevs_operational": 3, 00:14:32.539 "process": { 00:14:32.539 "type": "rebuild", 00:14:32.539 "target": "spare", 00:14:32.539 "progress": { 00:14:32.539 "blocks": 34816, 00:14:32.539 "percent": 54 00:14:32.539 } 00:14:32.539 }, 00:14:32.539 "base_bdevs_list": [ 00:14:32.539 { 00:14:32.539 "name": "spare", 00:14:32.539 "uuid": "2f79cd8f-e488-5d75-9203-c66f7dbe338e", 00:14:32.539 "is_configured": true, 00:14:32.539 "data_offset": 2048, 00:14:32.539 "data_size": 63488 00:14:32.539 }, 00:14:32.539 { 00:14:32.539 "name": null, 00:14:32.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.539 "is_configured": false, 00:14:32.539 "data_offset": 0, 00:14:32.539 "data_size": 63488 00:14:32.539 }, 00:14:32.539 { 00:14:32.539 "name": "BaseBdev3", 00:14:32.539 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:32.539 "is_configured": true, 00:14:32.539 "data_offset": 2048, 00:14:32.539 "data_size": 63488 00:14:32.539 }, 00:14:32.539 { 00:14:32.539 "name": "BaseBdev4", 00:14:32.539 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:32.539 "is_configured": true, 00:14:32.539 "data_offset": 2048, 00:14:32.539 "data_size": 63488 00:14:32.539 } 00:14:32.539 ] 00:14:32.539 }' 00:14:32.539 17:06:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.539 17:06:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.539 17:06:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.539 17:06:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.539 17:06:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:14:32.539 [2024-11-20 17:06:56.351579] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:33.716 95.50 IOPS, 286.50 MiB/s [2024-11-20T17:06:57.585Z] 17:06:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:33.716 17:06:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.716 17:06:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.716 17:06:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.716 17:06:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.716 17:06:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.716 17:06:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.716 17:06:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.716 17:06:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.716 17:06:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.716 17:06:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.716 17:06:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.716 "name": "raid_bdev1", 00:14:33.716 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:33.716 "strip_size_kb": 0, 00:14:33.716 "state": "online", 00:14:33.716 "raid_level": "raid1", 00:14:33.716 "superblock": true, 00:14:33.716 "num_base_bdevs": 4, 00:14:33.716 "num_base_bdevs_discovered": 3, 00:14:33.716 "num_base_bdevs_operational": 3, 00:14:33.716 "process": { 
00:14:33.716 "type": "rebuild", 00:14:33.716 "target": "spare", 00:14:33.716 "progress": { 00:14:33.716 "blocks": 55296, 00:14:33.716 "percent": 87 00:14:33.716 } 00:14:33.716 }, 00:14:33.716 "base_bdevs_list": [ 00:14:33.716 { 00:14:33.716 "name": "spare", 00:14:33.716 "uuid": "2f79cd8f-e488-5d75-9203-c66f7dbe338e", 00:14:33.716 "is_configured": true, 00:14:33.716 "data_offset": 2048, 00:14:33.716 "data_size": 63488 00:14:33.716 }, 00:14:33.716 { 00:14:33.716 "name": null, 00:14:33.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.716 "is_configured": false, 00:14:33.716 "data_offset": 0, 00:14:33.716 "data_size": 63488 00:14:33.716 }, 00:14:33.716 { 00:14:33.716 "name": "BaseBdev3", 00:14:33.716 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:33.716 "is_configured": true, 00:14:33.716 "data_offset": 2048, 00:14:33.716 "data_size": 63488 00:14:33.716 }, 00:14:33.716 { 00:14:33.716 "name": "BaseBdev4", 00:14:33.716 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:33.716 "is_configured": true, 00:14:33.716 "data_offset": 2048, 00:14:33.716 "data_size": 63488 00:14:33.716 } 00:14:33.716 ] 00:14:33.716 }' 00:14:33.716 17:06:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.716 17:06:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.716 17:06:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.716 17:06:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.716 17:06:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:33.983 [2024-11-20 17:06:57.684931] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:33.983 [2024-11-20 17:06:57.784987] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:33.983 [2024-11-20 
17:06:57.787505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.918 87.43 IOPS, 262.29 MiB/s [2024-11-20T17:06:58.787Z] 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.918 "name": "raid_bdev1", 00:14:34.918 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:34.918 "strip_size_kb": 0, 00:14:34.918 "state": "online", 00:14:34.918 "raid_level": "raid1", 00:14:34.918 "superblock": true, 00:14:34.918 "num_base_bdevs": 4, 00:14:34.918 "num_base_bdevs_discovered": 3, 00:14:34.918 "num_base_bdevs_operational": 3, 00:14:34.918 "base_bdevs_list": [ 00:14:34.918 { 00:14:34.918 "name": "spare", 00:14:34.918 "uuid": "2f79cd8f-e488-5d75-9203-c66f7dbe338e", 
00:14:34.918 "is_configured": true, 00:14:34.918 "data_offset": 2048, 00:14:34.918 "data_size": 63488 00:14:34.918 }, 00:14:34.918 { 00:14:34.918 "name": null, 00:14:34.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.918 "is_configured": false, 00:14:34.918 "data_offset": 0, 00:14:34.918 "data_size": 63488 00:14:34.918 }, 00:14:34.918 { 00:14:34.918 "name": "BaseBdev3", 00:14:34.918 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:34.918 "is_configured": true, 00:14:34.918 "data_offset": 2048, 00:14:34.918 "data_size": 63488 00:14:34.918 }, 00:14:34.918 { 00:14:34.918 "name": "BaseBdev4", 00:14:34.918 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:34.918 "is_configured": true, 00:14:34.918 "data_offset": 2048, 00:14:34.918 "data_size": 63488 00:14:34.918 } 00:14:34.918 ] 00:14:34.918 }' 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.918 17:06:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.918 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.918 "name": "raid_bdev1", 00:14:34.919 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:34.919 "strip_size_kb": 0, 00:14:34.919 "state": "online", 00:14:34.919 "raid_level": "raid1", 00:14:34.919 "superblock": true, 00:14:34.919 "num_base_bdevs": 4, 00:14:34.919 "num_base_bdevs_discovered": 3, 00:14:34.919 "num_base_bdevs_operational": 3, 00:14:34.919 "base_bdevs_list": [ 00:14:34.919 { 00:14:34.919 "name": "spare", 00:14:34.919 "uuid": "2f79cd8f-e488-5d75-9203-c66f7dbe338e", 00:14:34.919 "is_configured": true, 00:14:34.919 "data_offset": 2048, 00:14:34.919 "data_size": 63488 00:14:34.919 }, 00:14:34.919 { 00:14:34.919 "name": null, 00:14:34.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.919 "is_configured": false, 00:14:34.919 "data_offset": 0, 00:14:34.919 "data_size": 63488 00:14:34.919 }, 00:14:34.919 { 00:14:34.919 "name": "BaseBdev3", 00:14:34.919 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:34.919 "is_configured": true, 00:14:34.919 "data_offset": 2048, 00:14:34.919 "data_size": 63488 00:14:34.919 }, 00:14:34.919 { 00:14:34.919 "name": "BaseBdev4", 00:14:34.919 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:34.919 "is_configured": true, 00:14:34.919 "data_offset": 2048, 00:14:34.919 "data_size": 63488 00:14:34.919 } 00:14:34.919 ] 00:14:34.919 }' 00:14:34.919 17:06:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:34.919 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.177 81.62 IOPS, 244.88 MiB/s [2024-11-20T17:06:59.046Z] 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.177 "name": "raid_bdev1", 00:14:35.177 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:35.177 "strip_size_kb": 0, 00:14:35.177 "state": "online", 00:14:35.177 "raid_level": "raid1", 00:14:35.177 "superblock": true, 00:14:35.177 "num_base_bdevs": 4, 00:14:35.177 "num_base_bdevs_discovered": 3, 00:14:35.177 "num_base_bdevs_operational": 3, 00:14:35.177 "base_bdevs_list": [ 00:14:35.177 { 00:14:35.177 "name": "spare", 00:14:35.177 "uuid": "2f79cd8f-e488-5d75-9203-c66f7dbe338e", 00:14:35.177 "is_configured": true, 00:14:35.177 "data_offset": 2048, 00:14:35.177 "data_size": 63488 00:14:35.177 }, 00:14:35.177 { 00:14:35.177 "name": null, 00:14:35.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.177 "is_configured": false, 00:14:35.177 "data_offset": 0, 00:14:35.177 "data_size": 63488 00:14:35.177 }, 00:14:35.177 { 00:14:35.177 "name": "BaseBdev3", 00:14:35.177 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:35.177 "is_configured": true, 00:14:35.177 "data_offset": 2048, 00:14:35.177 "data_size": 63488 00:14:35.177 }, 00:14:35.177 { 00:14:35.177 "name": "BaseBdev4", 00:14:35.177 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:35.177 "is_configured": true, 00:14:35.177 "data_offset": 2048, 00:14:35.177 "data_size": 63488 00:14:35.177 } 00:14:35.177 ] 00:14:35.177 }' 00:14:35.177 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.177 17:06:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.744 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:35.744 17:06:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.744 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.744 [2024-11-20 17:06:59.309621] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.744 [2024-11-20 17:06:59.309659] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.744 00:14:35.744 Latency(us) 00:14:35.744 [2024-11-20T17:06:59.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.744 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:35.745 raid_bdev1 : 8.58 78.54 235.63 0.00 0.00 17224.32 266.24 111530.36 00:14:35.745 [2024-11-20T17:06:59.614Z] =================================================================================================================== 00:14:35.745 [2024-11-20T17:06:59.614Z] Total : 78.54 235.63 0.00 0.00 17224.32 266.24 111530.36 00:14:35.745 [2024-11-20 17:06:59.379691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.745 [2024-11-20 17:06:59.379805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.745 [2024-11-20 17:06:59.379932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.745 [2024-11-20 17:06:59.379954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:35.745 { 00:14:35.745 "results": [ 00:14:35.745 { 00:14:35.745 "job": "raid_bdev1", 00:14:35.745 "core_mask": "0x1", 00:14:35.745 "workload": "randrw", 00:14:35.745 "percentage": 50, 00:14:35.745 "status": "finished", 00:14:35.745 "queue_depth": 2, 00:14:35.745 "io_size": 3145728, 00:14:35.745 "runtime": 8.581191, 00:14:35.745 "iops": 78.54387578600686, 00:14:35.745 "mibps": 235.63162735802058, 00:14:35.745 "io_failed": 0, 
00:14:35.745 "io_timeout": 0, 00:14:35.745 "avg_latency_us": 17224.31810089021, 00:14:35.745 "min_latency_us": 266.24, 00:14:35.745 "max_latency_us": 111530.35636363637 00:14:35.745 } 00:14:35.745 ], 00:14:35.745 "core_count": 1 00:14:35.745 } 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:35.745 17:06:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.745 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:36.003 /dev/nbd0 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.003 1+0 records in 00:14:36.003 1+0 records out 00:14:36.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260805 s, 15.7 MB/s 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # size=4096 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.003 17:06:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:36.261 /dev/nbd1 00:14:36.261 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:36.261 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:36.261 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:36.261 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:36.261 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:36.261 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:36.261 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:36.261 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:36.261 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:36.261 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:36.262 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.262 1+0 records in 00:14:36.262 1+0 records out 00:14:36.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551613 s, 7.4 MB/s 00:14:36.262 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.262 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # 
size=4096 00:14:36.262 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.262 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:36.262 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:36.262 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.262 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.262 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:36.520 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:36.520 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.520 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:36.520 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:36.520 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:36.520 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.520 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.778 17:07:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.778 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:37.036 /dev/nbd1 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd1 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.036 1+0 records in 00:14:37.036 1+0 records out 00:14:37.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025535 s, 16.0 MB/s 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:37.036 17:07:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:37.294 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:37.294 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:37.294 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:37.294 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.294 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.295 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.553 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.812 [2024-11-20 17:07:01.434090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:37.812 [2024-11-20 17:07:01.434343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.812 [2024-11-20 17:07:01.434481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:37.812 [2024-11-20 17:07:01.434512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.812 [2024-11-20 17:07:01.437465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.812 [2024-11-20 17:07:01.437511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:37.812 [2024-11-20 17:07:01.437622] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:37.812 [2024-11-20 17:07:01.437694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:37.812 [2024-11-20 17:07:01.437910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.812 [2024-11-20 17:07:01.438068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:37.812 spare 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.812 17:07:01 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.812 [2024-11-20 17:07:01.538236] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:37.812 [2024-11-20 17:07:01.538471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:37.812 [2024-11-20 17:07:01.538894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:37.812 [2024-11-20 17:07:01.539270] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:37.812 [2024-11-20 17:07:01.539294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:37.812 [2024-11-20 17:07:01.539565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.812 "name": "raid_bdev1", 00:14:37.812 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:37.812 "strip_size_kb": 0, 00:14:37.812 "state": "online", 00:14:37.812 "raid_level": "raid1", 00:14:37.812 "superblock": true, 00:14:37.812 "num_base_bdevs": 4, 00:14:37.812 "num_base_bdevs_discovered": 3, 00:14:37.812 "num_base_bdevs_operational": 3, 00:14:37.812 "base_bdevs_list": [ 00:14:37.812 { 00:14:37.812 "name": "spare", 00:14:37.812 "uuid": "2f79cd8f-e488-5d75-9203-c66f7dbe338e", 00:14:37.812 "is_configured": true, 00:14:37.812 "data_offset": 2048, 00:14:37.812 "data_size": 63488 00:14:37.812 }, 00:14:37.812 { 00:14:37.812 "name": null, 00:14:37.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.812 "is_configured": false, 00:14:37.812 "data_offset": 2048, 00:14:37.812 "data_size": 63488 00:14:37.812 }, 00:14:37.812 { 00:14:37.812 "name": "BaseBdev3", 00:14:37.812 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:37.812 "is_configured": true, 00:14:37.812 "data_offset": 2048, 00:14:37.812 "data_size": 63488 00:14:37.812 }, 00:14:37.812 { 00:14:37.812 "name": "BaseBdev4", 00:14:37.812 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:37.812 "is_configured": true, 00:14:37.812 
"data_offset": 2048, 00:14:37.812 "data_size": 63488 00:14:37.812 } 00:14:37.812 ] 00:14:37.812 }' 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.812 17:07:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.379 "name": "raid_bdev1", 00:14:38.379 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:38.379 "strip_size_kb": 0, 00:14:38.379 "state": "online", 00:14:38.379 "raid_level": "raid1", 00:14:38.379 "superblock": true, 00:14:38.379 "num_base_bdevs": 4, 00:14:38.379 "num_base_bdevs_discovered": 3, 00:14:38.379 "num_base_bdevs_operational": 3, 00:14:38.379 "base_bdevs_list": [ 00:14:38.379 { 00:14:38.379 "name": "spare", 00:14:38.379 "uuid": 
"2f79cd8f-e488-5d75-9203-c66f7dbe338e", 00:14:38.379 "is_configured": true, 00:14:38.379 "data_offset": 2048, 00:14:38.379 "data_size": 63488 00:14:38.379 }, 00:14:38.379 { 00:14:38.379 "name": null, 00:14:38.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.379 "is_configured": false, 00:14:38.379 "data_offset": 2048, 00:14:38.379 "data_size": 63488 00:14:38.379 }, 00:14:38.379 { 00:14:38.379 "name": "BaseBdev3", 00:14:38.379 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:38.379 "is_configured": true, 00:14:38.379 "data_offset": 2048, 00:14:38.379 "data_size": 63488 00:14:38.379 }, 00:14:38.379 { 00:14:38.379 "name": "BaseBdev4", 00:14:38.379 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:38.379 "is_configured": true, 00:14:38.379 "data_offset": 2048, 00:14:38.379 "data_size": 63488 00:14:38.379 } 00:14:38.379 ] 00:14:38.379 }' 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 
00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.379 [2024-11-20 17:07:02.234608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:38.379 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.637 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.637 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.637 "name": "raid_bdev1", 00:14:38.637 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:38.637 "strip_size_kb": 0, 00:14:38.637 "state": "online", 00:14:38.637 "raid_level": "raid1", 00:14:38.637 "superblock": true, 00:14:38.637 "num_base_bdevs": 4, 00:14:38.637 "num_base_bdevs_discovered": 2, 00:14:38.637 "num_base_bdevs_operational": 2, 00:14:38.637 "base_bdevs_list": [ 00:14:38.637 { 00:14:38.637 "name": null, 00:14:38.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.637 "is_configured": false, 00:14:38.637 "data_offset": 0, 00:14:38.637 "data_size": 63488 00:14:38.637 }, 00:14:38.637 { 00:14:38.637 "name": null, 00:14:38.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.637 "is_configured": false, 00:14:38.637 "data_offset": 2048, 00:14:38.637 "data_size": 63488 00:14:38.637 }, 00:14:38.637 { 00:14:38.637 "name": "BaseBdev3", 00:14:38.637 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:38.637 "is_configured": true, 00:14:38.637 "data_offset": 2048, 00:14:38.637 "data_size": 63488 00:14:38.637 }, 00:14:38.637 { 00:14:38.637 "name": "BaseBdev4", 00:14:38.637 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:38.637 "is_configured": true, 00:14:38.637 "data_offset": 2048, 00:14:38.637 "data_size": 63488 00:14:38.637 } 00:14:38.637 ] 00:14:38.637 }' 00:14:38.637 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.637 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.895 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:14:38.895 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.895 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.895 [2024-11-20 17:07:02.750905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.895 [2024-11-20 17:07:02.751228] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:38.895 [2024-11-20 17:07:02.751254] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:38.895 [2024-11-20 17:07:02.751316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.155 [2024-11-20 17:07:02.765505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:39.155 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.155 17:07:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:39.155 [2024-11-20 17:07:02.768165] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:40.090 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.090 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.090 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.090 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.090 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.090 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.090 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.090 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.090 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.090 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.090 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.090 "name": "raid_bdev1", 00:14:40.090 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:40.090 "strip_size_kb": 0, 00:14:40.090 "state": "online", 00:14:40.090 "raid_level": "raid1", 00:14:40.090 "superblock": true, 00:14:40.090 "num_base_bdevs": 4, 00:14:40.090 "num_base_bdevs_discovered": 3, 00:14:40.090 "num_base_bdevs_operational": 3, 00:14:40.090 "process": { 00:14:40.090 "type": "rebuild", 00:14:40.091 "target": "spare", 00:14:40.091 "progress": { 00:14:40.091 "blocks": 20480, 00:14:40.091 "percent": 32 00:14:40.091 } 00:14:40.091 }, 00:14:40.091 "base_bdevs_list": [ 00:14:40.091 { 00:14:40.091 "name": "spare", 00:14:40.091 "uuid": "2f79cd8f-e488-5d75-9203-c66f7dbe338e", 00:14:40.091 "is_configured": true, 00:14:40.091 "data_offset": 2048, 00:14:40.091 "data_size": 63488 00:14:40.091 }, 00:14:40.091 { 00:14:40.091 "name": null, 00:14:40.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.091 "is_configured": false, 00:14:40.091 "data_offset": 2048, 00:14:40.091 "data_size": 63488 00:14:40.091 }, 00:14:40.091 { 00:14:40.091 "name": "BaseBdev3", 00:14:40.091 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:40.091 "is_configured": true, 00:14:40.091 "data_offset": 2048, 00:14:40.091 "data_size": 63488 00:14:40.091 }, 00:14:40.091 { 00:14:40.091 "name": "BaseBdev4", 00:14:40.091 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:40.091 "is_configured": true, 00:14:40.091 "data_offset": 2048, 00:14:40.091 "data_size": 63488 00:14:40.091 } 00:14:40.091 
] 00:14:40.091 }' 00:14:40.091 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.091 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.091 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.091 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.091 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:40.091 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.091 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.091 [2024-11-20 17:07:03.933397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.398 [2024-11-20 17:07:03.977313] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:40.398 [2024-11-20 17:07:03.977453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.398 [2024-11-20 17:07:03.977482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.398 [2024-11-20 17:07:03.977523] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:40.398 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.398 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:40.398 17:07:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.398 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.398 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.398 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.398 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.398 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.398 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.398 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.398 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.398 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.398 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.398 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.398 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.398 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.398 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.398 "name": "raid_bdev1", 00:14:40.398 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:40.398 "strip_size_kb": 0, 00:14:40.398 "state": "online", 00:14:40.398 "raid_level": "raid1", 00:14:40.398 "superblock": true, 00:14:40.398 "num_base_bdevs": 4, 00:14:40.398 "num_base_bdevs_discovered": 2, 00:14:40.398 "num_base_bdevs_operational": 2, 00:14:40.398 "base_bdevs_list": [ 00:14:40.398 { 00:14:40.398 "name": null, 00:14:40.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.398 "is_configured": false, 00:14:40.398 "data_offset": 0, 00:14:40.398 "data_size": 63488 00:14:40.398 }, 00:14:40.398 { 
00:14:40.398 "name": null, 00:14:40.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.398 "is_configured": false, 00:14:40.398 "data_offset": 2048, 00:14:40.398 "data_size": 63488 00:14:40.398 }, 00:14:40.398 { 00:14:40.398 "name": "BaseBdev3", 00:14:40.398 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:40.398 "is_configured": true, 00:14:40.398 "data_offset": 2048, 00:14:40.398 "data_size": 63488 00:14:40.398 }, 00:14:40.398 { 00:14:40.398 "name": "BaseBdev4", 00:14:40.398 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:40.398 "is_configured": true, 00:14:40.398 "data_offset": 2048, 00:14:40.398 "data_size": 63488 00:14:40.398 } 00:14:40.398 ] 00:14:40.398 }' 00:14:40.398 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.398 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.671 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:40.671 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.671 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.671 [2024-11-20 17:07:04.527737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:40.671 [2024-11-20 17:07:04.528018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.671 [2024-11-20 17:07:04.528115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:40.671 [2024-11-20 17:07:04.528142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.671 [2024-11-20 17:07:04.528753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.671 [2024-11-20 17:07:04.528820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:40.671 [2024-11-20 
17:07:04.528945] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:40.671 [2024-11-20 17:07:04.528977] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:40.671 [2024-11-20 17:07:04.528992] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:40.671 [2024-11-20 17:07:04.529043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.930 spare 00:14:40.930 [2024-11-20 17:07:04.543378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:40.930 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.930 17:07:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:40.930 [2024-11-20 17:07:04.545974] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:41.865 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.865 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.865 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.865 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.865 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.865 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.865 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.865 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.865 17:07:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.865 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.865 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.865 "name": "raid_bdev1", 00:14:41.865 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:41.865 "strip_size_kb": 0, 00:14:41.865 "state": "online", 00:14:41.865 "raid_level": "raid1", 00:14:41.865 "superblock": true, 00:14:41.865 "num_base_bdevs": 4, 00:14:41.865 "num_base_bdevs_discovered": 3, 00:14:41.865 "num_base_bdevs_operational": 3, 00:14:41.865 "process": { 00:14:41.865 "type": "rebuild", 00:14:41.865 "target": "spare", 00:14:41.865 "progress": { 00:14:41.865 "blocks": 20480, 00:14:41.865 "percent": 32 00:14:41.865 } 00:14:41.865 }, 00:14:41.865 "base_bdevs_list": [ 00:14:41.865 { 00:14:41.865 "name": "spare", 00:14:41.866 "uuid": "2f79cd8f-e488-5d75-9203-c66f7dbe338e", 00:14:41.866 "is_configured": true, 00:14:41.866 "data_offset": 2048, 00:14:41.866 "data_size": 63488 00:14:41.866 }, 00:14:41.866 { 00:14:41.866 "name": null, 00:14:41.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.866 "is_configured": false, 00:14:41.866 "data_offset": 2048, 00:14:41.866 "data_size": 63488 00:14:41.866 }, 00:14:41.866 { 00:14:41.866 "name": "BaseBdev3", 00:14:41.866 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:41.866 "is_configured": true, 00:14:41.866 "data_offset": 2048, 00:14:41.866 "data_size": 63488 00:14:41.866 }, 00:14:41.866 { 00:14:41.866 "name": "BaseBdev4", 00:14:41.866 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:41.866 "is_configured": true, 00:14:41.866 "data_offset": 2048, 00:14:41.866 "data_size": 63488 00:14:41.866 } 00:14:41.866 ] 00:14:41.866 }' 00:14:41.866 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.866 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.866 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.866 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.866 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:41.866 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.866 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.866 [2024-11-20 17:07:05.707521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.124 [2024-11-20 17:07:05.754620] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:42.124 [2024-11-20 17:07:05.754926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.124 [2024-11-20 17:07:05.754963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.124 [2024-11-20 17:07:05.754975] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:42.124 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.124 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:42.124 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.124 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.124 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.124 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.124 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:42.124 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.124 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.124 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.124 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.124 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.124 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.124 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.124 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.124 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.124 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.124 "name": "raid_bdev1", 00:14:42.125 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:42.125 "strip_size_kb": 0, 00:14:42.125 "state": "online", 00:14:42.125 "raid_level": "raid1", 00:14:42.125 "superblock": true, 00:14:42.125 "num_base_bdevs": 4, 00:14:42.125 "num_base_bdevs_discovered": 2, 00:14:42.125 "num_base_bdevs_operational": 2, 00:14:42.125 "base_bdevs_list": [ 00:14:42.125 { 00:14:42.125 "name": null, 00:14:42.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.125 "is_configured": false, 00:14:42.125 "data_offset": 0, 00:14:42.125 "data_size": 63488 00:14:42.125 }, 00:14:42.125 { 00:14:42.125 "name": null, 00:14:42.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.125 "is_configured": false, 00:14:42.125 "data_offset": 2048, 00:14:42.125 "data_size": 63488 00:14:42.125 }, 
00:14:42.125 { 00:14:42.125 "name": "BaseBdev3", 00:14:42.125 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:42.125 "is_configured": true, 00:14:42.125 "data_offset": 2048, 00:14:42.125 "data_size": 63488 00:14:42.125 }, 00:14:42.125 { 00:14:42.125 "name": "BaseBdev4", 00:14:42.125 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:42.125 "is_configured": true, 00:14:42.125 "data_offset": 2048, 00:14:42.125 "data_size": 63488 00:14:42.125 } 00:14:42.125 ] 00:14:42.125 }' 00:14:42.125 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.125 17:07:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.693 "name": "raid_bdev1", 00:14:42.693 "uuid": 
"c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:42.693 "strip_size_kb": 0, 00:14:42.693 "state": "online", 00:14:42.693 "raid_level": "raid1", 00:14:42.693 "superblock": true, 00:14:42.693 "num_base_bdevs": 4, 00:14:42.693 "num_base_bdevs_discovered": 2, 00:14:42.693 "num_base_bdevs_operational": 2, 00:14:42.693 "base_bdevs_list": [ 00:14:42.693 { 00:14:42.693 "name": null, 00:14:42.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.693 "is_configured": false, 00:14:42.693 "data_offset": 0, 00:14:42.693 "data_size": 63488 00:14:42.693 }, 00:14:42.693 { 00:14:42.693 "name": null, 00:14:42.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.693 "is_configured": false, 00:14:42.693 "data_offset": 2048, 00:14:42.693 "data_size": 63488 00:14:42.693 }, 00:14:42.693 { 00:14:42.693 "name": "BaseBdev3", 00:14:42.693 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:42.693 "is_configured": true, 00:14:42.693 "data_offset": 2048, 00:14:42.693 "data_size": 63488 00:14:42.693 }, 00:14:42.693 { 00:14:42.693 "name": "BaseBdev4", 00:14:42.693 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:42.693 "is_configured": true, 00:14:42.693 "data_offset": 2048, 00:14:42.693 "data_size": 63488 00:14:42.693 } 00:14:42.693 ] 00:14:42.693 }' 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.693 17:07:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.693 [2024-11-20 17:07:06.420199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:42.693 [2024-11-20 17:07:06.420406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.693 [2024-11-20 17:07:06.420518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:42.693 [2024-11-20 17:07:06.420745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.693 [2024-11-20 17:07:06.421377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.693 [2024-11-20 17:07:06.421615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:42.693 [2024-11-20 17:07:06.421870] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:42.693 [2024-11-20 17:07:06.421996] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:42.693 [2024-11-20 17:07:06.422144] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:42.693 [2024-11-20 17:07:06.422331] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:42.693 BaseBdev1 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:42.693 17:07:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:43.630 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:43.630 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.630 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.630 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.630 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.630 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.630 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.630 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.630 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.630 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.630 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.630 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.630 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.630 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.630 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.630 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.630 "name": "raid_bdev1", 00:14:43.630 "uuid": 
"c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:43.630 "strip_size_kb": 0, 00:14:43.630 "state": "online", 00:14:43.630 "raid_level": "raid1", 00:14:43.630 "superblock": true, 00:14:43.630 "num_base_bdevs": 4, 00:14:43.630 "num_base_bdevs_discovered": 2, 00:14:43.630 "num_base_bdevs_operational": 2, 00:14:43.630 "base_bdevs_list": [ 00:14:43.630 { 00:14:43.630 "name": null, 00:14:43.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.630 "is_configured": false, 00:14:43.630 "data_offset": 0, 00:14:43.630 "data_size": 63488 00:14:43.630 }, 00:14:43.631 { 00:14:43.631 "name": null, 00:14:43.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.631 "is_configured": false, 00:14:43.631 "data_offset": 2048, 00:14:43.631 "data_size": 63488 00:14:43.631 }, 00:14:43.631 { 00:14:43.631 "name": "BaseBdev3", 00:14:43.631 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:43.631 "is_configured": true, 00:14:43.631 "data_offset": 2048, 00:14:43.631 "data_size": 63488 00:14:43.631 }, 00:14:43.631 { 00:14:43.631 "name": "BaseBdev4", 00:14:43.631 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:43.631 "is_configured": true, 00:14:43.631 "data_offset": 2048, 00:14:43.631 "data_size": 63488 00:14:43.631 } 00:14:43.631 ] 00:14:43.631 }' 00:14:43.631 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.631 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.198 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.198 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.198 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.198 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.198 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.198 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.198 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.198 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.198 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.198 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.198 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.198 "name": "raid_bdev1", 00:14:44.198 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:44.198 "strip_size_kb": 0, 00:14:44.198 "state": "online", 00:14:44.198 "raid_level": "raid1", 00:14:44.198 "superblock": true, 00:14:44.198 "num_base_bdevs": 4, 00:14:44.198 "num_base_bdevs_discovered": 2, 00:14:44.198 "num_base_bdevs_operational": 2, 00:14:44.198 "base_bdevs_list": [ 00:14:44.198 { 00:14:44.198 "name": null, 00:14:44.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.198 "is_configured": false, 00:14:44.198 "data_offset": 0, 00:14:44.198 "data_size": 63488 00:14:44.198 }, 00:14:44.198 { 00:14:44.198 "name": null, 00:14:44.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.198 "is_configured": false, 00:14:44.198 "data_offset": 2048, 00:14:44.198 "data_size": 63488 00:14:44.198 }, 00:14:44.198 { 00:14:44.198 "name": "BaseBdev3", 00:14:44.198 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:44.198 "is_configured": true, 00:14:44.198 "data_offset": 2048, 00:14:44.198 "data_size": 63488 00:14:44.198 }, 00:14:44.198 { 00:14:44.198 "name": "BaseBdev4", 00:14:44.198 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:44.198 "is_configured": true, 00:14:44.198 "data_offset": 2048, 00:14:44.198 "data_size": 63488 00:14:44.198 
} 00:14:44.198 ] 00:14:44.198 }' 00:14:44.198 17:07:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.198 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.198 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.456 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.456 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:44.456 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:44.456 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:44.456 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:44.456 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.457 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:44.457 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.457 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:44.457 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.457 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.457 [2024-11-20 17:07:08.097016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.457 [2024-11-20 17:07:08.097382] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:14:44.457 [2024-11-20 17:07:08.097429] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:44.457 request: 00:14:44.457 { 00:14:44.457 "base_bdev": "BaseBdev1", 00:14:44.457 "raid_bdev": "raid_bdev1", 00:14:44.457 "method": "bdev_raid_add_base_bdev", 00:14:44.457 "req_id": 1 00:14:44.457 } 00:14:44.457 Got JSON-RPC error response 00:14:44.457 response: 00:14:44.457 { 00:14:44.457 "code": -22, 00:14:44.457 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:44.457 } 00:14:44.457 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:44.457 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:44.457 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:44.457 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:44.457 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:44.457 17:07:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.391 "name": "raid_bdev1", 00:14:45.391 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:45.391 "strip_size_kb": 0, 00:14:45.391 "state": "online", 00:14:45.391 "raid_level": "raid1", 00:14:45.391 "superblock": true, 00:14:45.391 "num_base_bdevs": 4, 00:14:45.391 "num_base_bdevs_discovered": 2, 00:14:45.391 "num_base_bdevs_operational": 2, 00:14:45.391 "base_bdevs_list": [ 00:14:45.391 { 00:14:45.391 "name": null, 00:14:45.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.391 "is_configured": false, 00:14:45.391 "data_offset": 0, 00:14:45.391 "data_size": 63488 00:14:45.391 }, 00:14:45.391 { 00:14:45.391 "name": null, 00:14:45.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.391 "is_configured": false, 00:14:45.391 "data_offset": 2048, 00:14:45.391 "data_size": 63488 00:14:45.391 }, 00:14:45.391 { 00:14:45.391 "name": "BaseBdev3", 00:14:45.391 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:45.391 "is_configured": true, 00:14:45.391 
"data_offset": 2048, 00:14:45.391 "data_size": 63488 00:14:45.391 }, 00:14:45.391 { 00:14:45.391 "name": "BaseBdev4", 00:14:45.391 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:45.391 "is_configured": true, 00:14:45.391 "data_offset": 2048, 00:14:45.391 "data_size": 63488 00:14:45.391 } 00:14:45.391 ] 00:14:45.391 }' 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.391 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.957 "name": "raid_bdev1", 00:14:45.957 "uuid": "c0bd369d-07ac-4fbd-9a97-b958a79a72ba", 00:14:45.957 "strip_size_kb": 0, 00:14:45.957 "state": "online", 00:14:45.957 "raid_level": "raid1", 00:14:45.957 "superblock": true, 
00:14:45.957 "num_base_bdevs": 4, 00:14:45.957 "num_base_bdevs_discovered": 2, 00:14:45.957 "num_base_bdevs_operational": 2, 00:14:45.957 "base_bdevs_list": [ 00:14:45.957 { 00:14:45.957 "name": null, 00:14:45.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.957 "is_configured": false, 00:14:45.957 "data_offset": 0, 00:14:45.957 "data_size": 63488 00:14:45.957 }, 00:14:45.957 { 00:14:45.957 "name": null, 00:14:45.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.957 "is_configured": false, 00:14:45.957 "data_offset": 2048, 00:14:45.957 "data_size": 63488 00:14:45.957 }, 00:14:45.957 { 00:14:45.957 "name": "BaseBdev3", 00:14:45.957 "uuid": "0e1d337d-e456-5226-a520-2bdc002d43c4", 00:14:45.957 "is_configured": true, 00:14:45.957 "data_offset": 2048, 00:14:45.957 "data_size": 63488 00:14:45.957 }, 00:14:45.957 { 00:14:45.957 "name": "BaseBdev4", 00:14:45.957 "uuid": "47ec7dc6-c654-5198-9e36-44c123682fde", 00:14:45.957 "is_configured": true, 00:14:45.957 "data_offset": 2048, 00:14:45.957 "data_size": 63488 00:14:45.957 } 00:14:45.957 ] 00:14:45.957 }' 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79269 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79269 ']' 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79269 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:45.957 17:07:09 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.957 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79269 00:14:45.957 killing process with pid 79269 00:14:45.957 Received shutdown signal, test time was about 19.026162 seconds 00:14:45.957 00:14:45.957 Latency(us) 00:14:45.957 [2024-11-20T17:07:09.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.958 [2024-11-20T17:07:09.827Z] =================================================================================================================== 00:14:45.958 [2024-11-20T17:07:09.827Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:45.958 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:45.958 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:45.958 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79269' 00:14:45.958 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79269 00:14:45.958 [2024-11-20 17:07:09.806478] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:45.958 17:07:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79269 00:14:45.958 [2024-11-20 17:07:09.806617] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.958 [2024-11-20 17:07:09.806712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.958 [2024-11-20 17:07:09.806731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:46.525 [2024-11-20 17:07:10.140449] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:47.462 17:07:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:14:47.462 ************************************ 00:14:47.462 END TEST raid_rebuild_test_sb_io 00:14:47.462 ************************************ 00:14:47.462 00:14:47.462 real 0m22.470s 00:14:47.462 user 0m30.587s 00:14:47.462 sys 0m2.273s 00:14:47.462 17:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.462 17:07:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.462 17:07:11 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:47.462 17:07:11 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:47.462 17:07:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:47.462 17:07:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.462 17:07:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:47.462 ************************************ 00:14:47.462 START TEST raid5f_state_function_test 00:14:47.462 ************************************ 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' 
false = true ']' 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:47.463 Process raid pid: 79997 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79997 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79997' 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79997 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79997 ']' 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.463 17:07:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.728 [2024-11-20 17:07:11.344841] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:14:47.728 [2024-11-20 17:07:11.345320] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.728 [2024-11-20 17:07:11.534840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.986 [2024-11-20 17:07:11.669040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.245 [2024-11-20 17:07:11.872639] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.245 [2024-11-20 17:07:11.872952] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.504 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.504 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:48.504 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:48.504 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.504 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.504 [2024-11-20 17:07:12.360134] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:48.504 [2024-11-20 17:07:12.360358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:48.504 [2024-11-20 17:07:12.360503] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:48.504 [2024-11-20 17:07:12.360562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:48.504 [2024-11-20 17:07:12.360599] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:48.504 [2024-11-20 17:07:12.360614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:48.504 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.504 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:48.504 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.504 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.504 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.504 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.504 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.504 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.504 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.504 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.504 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.763 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.763 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.763 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.763 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.763 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:48.763 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.763 "name": "Existed_Raid", 00:14:48.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.763 "strip_size_kb": 64, 00:14:48.763 "state": "configuring", 00:14:48.763 "raid_level": "raid5f", 00:14:48.763 "superblock": false, 00:14:48.763 "num_base_bdevs": 3, 00:14:48.763 "num_base_bdevs_discovered": 0, 00:14:48.763 "num_base_bdevs_operational": 3, 00:14:48.763 "base_bdevs_list": [ 00:14:48.763 { 00:14:48.763 "name": "BaseBdev1", 00:14:48.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.763 "is_configured": false, 00:14:48.763 "data_offset": 0, 00:14:48.763 "data_size": 0 00:14:48.763 }, 00:14:48.763 { 00:14:48.763 "name": "BaseBdev2", 00:14:48.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.763 "is_configured": false, 00:14:48.763 "data_offset": 0, 00:14:48.763 "data_size": 0 00:14:48.763 }, 00:14:48.763 { 00:14:48.763 "name": "BaseBdev3", 00:14:48.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.763 "is_configured": false, 00:14:48.763 "data_offset": 0, 00:14:48.763 "data_size": 0 00:14:48.763 } 00:14:48.763 ] 00:14:48.763 }' 00:14:48.763 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.763 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.021 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:49.021 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.021 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.021 [2024-11-20 17:07:12.872184] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.022 [2024-11-20 17:07:12.872359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:49.022 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.022 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:49.022 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.022 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.022 [2024-11-20 17:07:12.884174] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:49.022 [2024-11-20 17:07:12.884338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:49.022 [2024-11-20 17:07:12.884471] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:49.022 [2024-11-20 17:07:12.884505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.022 [2024-11-20 17:07:12.884517] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:49.022 [2024-11-20 17:07:12.884533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:49.022 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.022 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.281 [2024-11-20 17:07:12.929808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.281 BaseBdev1 00:14:49.281 17:07:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.281 [ 00:14:49.281 { 00:14:49.281 "name": "BaseBdev1", 00:14:49.281 "aliases": [ 00:14:49.281 "99889b1b-72d8-4e82-ba3e-d2970b6b56c4" 00:14:49.281 ], 00:14:49.281 "product_name": "Malloc disk", 00:14:49.281 "block_size": 512, 00:14:49.281 "num_blocks": 65536, 00:14:49.281 "uuid": "99889b1b-72d8-4e82-ba3e-d2970b6b56c4", 00:14:49.281 "assigned_rate_limits": { 00:14:49.281 "rw_ios_per_sec": 0, 00:14:49.281 
"rw_mbytes_per_sec": 0, 00:14:49.281 "r_mbytes_per_sec": 0, 00:14:49.281 "w_mbytes_per_sec": 0 00:14:49.281 }, 00:14:49.281 "claimed": true, 00:14:49.281 "claim_type": "exclusive_write", 00:14:49.281 "zoned": false, 00:14:49.281 "supported_io_types": { 00:14:49.281 "read": true, 00:14:49.281 "write": true, 00:14:49.281 "unmap": true, 00:14:49.281 "flush": true, 00:14:49.281 "reset": true, 00:14:49.281 "nvme_admin": false, 00:14:49.281 "nvme_io": false, 00:14:49.281 "nvme_io_md": false, 00:14:49.281 "write_zeroes": true, 00:14:49.281 "zcopy": true, 00:14:49.281 "get_zone_info": false, 00:14:49.281 "zone_management": false, 00:14:49.281 "zone_append": false, 00:14:49.281 "compare": false, 00:14:49.281 "compare_and_write": false, 00:14:49.281 "abort": true, 00:14:49.281 "seek_hole": false, 00:14:49.281 "seek_data": false, 00:14:49.281 "copy": true, 00:14:49.281 "nvme_iov_md": false 00:14:49.281 }, 00:14:49.281 "memory_domains": [ 00:14:49.281 { 00:14:49.281 "dma_device_id": "system", 00:14:49.281 "dma_device_type": 1 00:14:49.281 }, 00:14:49.281 { 00:14:49.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.281 "dma_device_type": 2 00:14:49.281 } 00:14:49.281 ], 00:14:49.281 "driver_specific": {} 00:14:49.281 } 00:14:49.281 ] 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.281 17:07:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.281 17:07:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.281 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.281 "name": "Existed_Raid", 00:14:49.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.281 "strip_size_kb": 64, 00:14:49.281 "state": "configuring", 00:14:49.281 "raid_level": "raid5f", 00:14:49.281 "superblock": false, 00:14:49.281 "num_base_bdevs": 3, 00:14:49.281 "num_base_bdevs_discovered": 1, 00:14:49.281 "num_base_bdevs_operational": 3, 00:14:49.281 "base_bdevs_list": [ 00:14:49.281 { 00:14:49.281 "name": "BaseBdev1", 00:14:49.281 "uuid": "99889b1b-72d8-4e82-ba3e-d2970b6b56c4", 00:14:49.281 "is_configured": true, 00:14:49.281 "data_offset": 0, 00:14:49.281 "data_size": 65536 00:14:49.281 }, 00:14:49.281 { 00:14:49.281 "name": 
"BaseBdev2", 00:14:49.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.281 "is_configured": false, 00:14:49.281 "data_offset": 0, 00:14:49.281 "data_size": 0 00:14:49.281 }, 00:14:49.281 { 00:14:49.281 "name": "BaseBdev3", 00:14:49.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.281 "is_configured": false, 00:14:49.281 "data_offset": 0, 00:14:49.281 "data_size": 0 00:14:49.281 } 00:14:49.281 ] 00:14:49.281 }' 00:14:49.281 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.281 17:07:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.848 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:49.848 17:07:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.848 17:07:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.848 [2024-11-20 17:07:13.502095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.848 [2024-11-20 17:07:13.502278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:49.848 17:07:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.849 [2024-11-20 17:07:13.510115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.849 [2024-11-20 17:07:13.512820] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:49.849 [2024-11-20 17:07:13.513021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.849 [2024-11-20 17:07:13.513048] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:49.849 [2024-11-20 17:07:13.513066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.849 "name": "Existed_Raid", 00:14:49.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.849 "strip_size_kb": 64, 00:14:49.849 "state": "configuring", 00:14:49.849 "raid_level": "raid5f", 00:14:49.849 "superblock": false, 00:14:49.849 "num_base_bdevs": 3, 00:14:49.849 "num_base_bdevs_discovered": 1, 00:14:49.849 "num_base_bdevs_operational": 3, 00:14:49.849 "base_bdevs_list": [ 00:14:49.849 { 00:14:49.849 "name": "BaseBdev1", 00:14:49.849 "uuid": "99889b1b-72d8-4e82-ba3e-d2970b6b56c4", 00:14:49.849 "is_configured": true, 00:14:49.849 "data_offset": 0, 00:14:49.849 "data_size": 65536 00:14:49.849 }, 00:14:49.849 { 00:14:49.849 "name": "BaseBdev2", 00:14:49.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.849 "is_configured": false, 00:14:49.849 "data_offset": 0, 00:14:49.849 "data_size": 0 00:14:49.849 }, 00:14:49.849 { 00:14:49.849 "name": "BaseBdev3", 00:14:49.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.849 "is_configured": false, 00:14:49.849 "data_offset": 0, 00:14:49.849 "data_size": 0 00:14:49.849 } 00:14:49.849 ] 00:14:49.849 }' 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.849 17:07:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.416 17:07:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:50.416 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.416 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.416 [2024-11-20 17:07:14.095666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:50.416 BaseBdev2 00:14:50.416 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.416 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:50.416 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:50.416 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:50.416 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:50.416 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:50.416 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:50.416 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:50.416 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.416 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.416 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.416 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:50.416 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.416 17:07:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:50.416 [ 00:14:50.416 { 00:14:50.416 "name": "BaseBdev2", 00:14:50.416 "aliases": [ 00:14:50.416 "11804127-1c2e-48a6-b336-59a5afc8ed6a" 00:14:50.416 ], 00:14:50.416 "product_name": "Malloc disk", 00:14:50.416 "block_size": 512, 00:14:50.416 "num_blocks": 65536, 00:14:50.416 "uuid": "11804127-1c2e-48a6-b336-59a5afc8ed6a", 00:14:50.416 "assigned_rate_limits": { 00:14:50.416 "rw_ios_per_sec": 0, 00:14:50.416 "rw_mbytes_per_sec": 0, 00:14:50.416 "r_mbytes_per_sec": 0, 00:14:50.416 "w_mbytes_per_sec": 0 00:14:50.416 }, 00:14:50.416 "claimed": true, 00:14:50.416 "claim_type": "exclusive_write", 00:14:50.416 "zoned": false, 00:14:50.416 "supported_io_types": { 00:14:50.416 "read": true, 00:14:50.416 "write": true, 00:14:50.416 "unmap": true, 00:14:50.416 "flush": true, 00:14:50.416 "reset": true, 00:14:50.416 "nvme_admin": false, 00:14:50.416 "nvme_io": false, 00:14:50.416 "nvme_io_md": false, 00:14:50.416 "write_zeroes": true, 00:14:50.416 "zcopy": true, 00:14:50.416 "get_zone_info": false, 00:14:50.416 "zone_management": false, 00:14:50.416 "zone_append": false, 00:14:50.416 "compare": false, 00:14:50.416 "compare_and_write": false, 00:14:50.416 "abort": true, 00:14:50.416 "seek_hole": false, 00:14:50.416 "seek_data": false, 00:14:50.416 "copy": true, 00:14:50.416 "nvme_iov_md": false 00:14:50.416 }, 00:14:50.416 "memory_domains": [ 00:14:50.416 { 00:14:50.416 "dma_device_id": "system", 00:14:50.416 "dma_device_type": 1 00:14:50.416 }, 00:14:50.416 { 00:14:50.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.417 "dma_device_type": 2 00:14:50.417 } 00:14:50.417 ], 00:14:50.417 "driver_specific": {} 00:14:50.417 } 00:14:50.417 ] 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:50.417 "name": "Existed_Raid", 00:14:50.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.417 "strip_size_kb": 64, 00:14:50.417 "state": "configuring", 00:14:50.417 "raid_level": "raid5f", 00:14:50.417 "superblock": false, 00:14:50.417 "num_base_bdevs": 3, 00:14:50.417 "num_base_bdevs_discovered": 2, 00:14:50.417 "num_base_bdevs_operational": 3, 00:14:50.417 "base_bdevs_list": [ 00:14:50.417 { 00:14:50.417 "name": "BaseBdev1", 00:14:50.417 "uuid": "99889b1b-72d8-4e82-ba3e-d2970b6b56c4", 00:14:50.417 "is_configured": true, 00:14:50.417 "data_offset": 0, 00:14:50.417 "data_size": 65536 00:14:50.417 }, 00:14:50.417 { 00:14:50.417 "name": "BaseBdev2", 00:14:50.417 "uuid": "11804127-1c2e-48a6-b336-59a5afc8ed6a", 00:14:50.417 "is_configured": true, 00:14:50.417 "data_offset": 0, 00:14:50.417 "data_size": 65536 00:14:50.417 }, 00:14:50.417 { 00:14:50.417 "name": "BaseBdev3", 00:14:50.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.417 "is_configured": false, 00:14:50.417 "data_offset": 0, 00:14:50.417 "data_size": 0 00:14:50.417 } 00:14:50.417 ] 00:14:50.417 }' 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.417 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.985 [2024-11-20 17:07:14.707618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.985 [2024-11-20 17:07:14.707707] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:50.985 [2024-11-20 17:07:14.707734] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:50.985 [2024-11-20 17:07:14.708191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:50.985 BaseBdev3 00:14:50.985 [2024-11-20 17:07:14.713423] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:50.985 [2024-11-20 17:07:14.713462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:50.985 [2024-11-20 17:07:14.713856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.985 [ 00:14:50.985 { 00:14:50.985 "name": "BaseBdev3", 00:14:50.985 "aliases": [ 00:14:50.985 "6ebd47a3-3caf-4d14-8a6d-de35d4cfa66c" 00:14:50.985 ], 00:14:50.985 "product_name": "Malloc disk", 00:14:50.985 "block_size": 512, 00:14:50.985 "num_blocks": 65536, 00:14:50.985 "uuid": "6ebd47a3-3caf-4d14-8a6d-de35d4cfa66c", 00:14:50.985 "assigned_rate_limits": { 00:14:50.985 "rw_ios_per_sec": 0, 00:14:50.985 "rw_mbytes_per_sec": 0, 00:14:50.985 "r_mbytes_per_sec": 0, 00:14:50.985 "w_mbytes_per_sec": 0 00:14:50.985 }, 00:14:50.985 "claimed": true, 00:14:50.985 "claim_type": "exclusive_write", 00:14:50.985 "zoned": false, 00:14:50.985 "supported_io_types": { 00:14:50.985 "read": true, 00:14:50.985 "write": true, 00:14:50.985 "unmap": true, 00:14:50.985 "flush": true, 00:14:50.985 "reset": true, 00:14:50.985 "nvme_admin": false, 00:14:50.985 "nvme_io": false, 00:14:50.985 "nvme_io_md": false, 00:14:50.985 "write_zeroes": true, 00:14:50.985 "zcopy": true, 00:14:50.985 "get_zone_info": false, 00:14:50.985 "zone_management": false, 00:14:50.985 "zone_append": false, 00:14:50.985 "compare": false, 00:14:50.985 "compare_and_write": false, 00:14:50.985 "abort": true, 00:14:50.985 "seek_hole": false, 00:14:50.985 "seek_data": false, 00:14:50.985 "copy": true, 00:14:50.985 "nvme_iov_md": false 00:14:50.985 }, 00:14:50.985 "memory_domains": [ 00:14:50.985 { 00:14:50.985 "dma_device_id": "system", 00:14:50.985 "dma_device_type": 1 00:14:50.985 }, 00:14:50.985 { 00:14:50.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.985 "dma_device_type": 2 00:14:50.985 } 00:14:50.985 ], 00:14:50.985 "driver_specific": {} 00:14:50.985 } 00:14:50.985 ] 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.985 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.986 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.986 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.986 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.986 17:07:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.986 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.986 "name": "Existed_Raid", 00:14:50.986 "uuid": "70ed2e3f-c798-4b48-95b6-cb24a593eec5", 00:14:50.986 "strip_size_kb": 64, 00:14:50.986 "state": "online", 00:14:50.986 "raid_level": "raid5f", 00:14:50.986 "superblock": false, 00:14:50.986 "num_base_bdevs": 3, 00:14:50.986 "num_base_bdevs_discovered": 3, 00:14:50.986 "num_base_bdevs_operational": 3, 00:14:50.986 "base_bdevs_list": [ 00:14:50.986 { 00:14:50.986 "name": "BaseBdev1", 00:14:50.986 "uuid": "99889b1b-72d8-4e82-ba3e-d2970b6b56c4", 00:14:50.986 "is_configured": true, 00:14:50.986 "data_offset": 0, 00:14:50.986 "data_size": 65536 00:14:50.986 }, 00:14:50.986 { 00:14:50.986 "name": "BaseBdev2", 00:14:50.986 "uuid": "11804127-1c2e-48a6-b336-59a5afc8ed6a", 00:14:50.986 "is_configured": true, 00:14:50.986 "data_offset": 0, 00:14:50.986 "data_size": 65536 00:14:50.986 }, 00:14:50.986 { 00:14:50.986 "name": "BaseBdev3", 00:14:50.986 "uuid": "6ebd47a3-3caf-4d14-8a6d-de35d4cfa66c", 00:14:50.986 "is_configured": true, 00:14:50.986 "data_offset": 0, 00:14:50.986 "data_size": 65536 00:14:50.986 } 00:14:50.986 ] 00:14:50.986 }' 00:14:50.986 17:07:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.986 17:07:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.552 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:51.552 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:51.552 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:51.552 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:51.552 17:07:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:51.553 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:51.553 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:51.553 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:51.553 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.553 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.553 [2024-11-20 17:07:15.271994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:51.553 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.553 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:51.553 "name": "Existed_Raid", 00:14:51.553 "aliases": [ 00:14:51.553 "70ed2e3f-c798-4b48-95b6-cb24a593eec5" 00:14:51.553 ], 00:14:51.553 "product_name": "Raid Volume", 00:14:51.553 "block_size": 512, 00:14:51.553 "num_blocks": 131072, 00:14:51.553 "uuid": "70ed2e3f-c798-4b48-95b6-cb24a593eec5", 00:14:51.553 "assigned_rate_limits": { 00:14:51.553 "rw_ios_per_sec": 0, 00:14:51.553 "rw_mbytes_per_sec": 0, 00:14:51.553 "r_mbytes_per_sec": 0, 00:14:51.553 "w_mbytes_per_sec": 0 00:14:51.553 }, 00:14:51.553 "claimed": false, 00:14:51.553 "zoned": false, 00:14:51.553 "supported_io_types": { 00:14:51.553 "read": true, 00:14:51.553 "write": true, 00:14:51.553 "unmap": false, 00:14:51.553 "flush": false, 00:14:51.553 "reset": true, 00:14:51.553 "nvme_admin": false, 00:14:51.553 "nvme_io": false, 00:14:51.553 "nvme_io_md": false, 00:14:51.553 "write_zeroes": true, 00:14:51.553 "zcopy": false, 00:14:51.553 "get_zone_info": false, 00:14:51.553 "zone_management": false, 00:14:51.553 "zone_append": false, 
00:14:51.553 "compare": false, 00:14:51.553 "compare_and_write": false, 00:14:51.553 "abort": false, 00:14:51.553 "seek_hole": false, 00:14:51.553 "seek_data": false, 00:14:51.553 "copy": false, 00:14:51.553 "nvme_iov_md": false 00:14:51.553 }, 00:14:51.553 "driver_specific": { 00:14:51.553 "raid": { 00:14:51.553 "uuid": "70ed2e3f-c798-4b48-95b6-cb24a593eec5", 00:14:51.553 "strip_size_kb": 64, 00:14:51.553 "state": "online", 00:14:51.553 "raid_level": "raid5f", 00:14:51.553 "superblock": false, 00:14:51.553 "num_base_bdevs": 3, 00:14:51.553 "num_base_bdevs_discovered": 3, 00:14:51.553 "num_base_bdevs_operational": 3, 00:14:51.553 "base_bdevs_list": [ 00:14:51.553 { 00:14:51.553 "name": "BaseBdev1", 00:14:51.553 "uuid": "99889b1b-72d8-4e82-ba3e-d2970b6b56c4", 00:14:51.553 "is_configured": true, 00:14:51.553 "data_offset": 0, 00:14:51.553 "data_size": 65536 00:14:51.553 }, 00:14:51.553 { 00:14:51.553 "name": "BaseBdev2", 00:14:51.553 "uuid": "11804127-1c2e-48a6-b336-59a5afc8ed6a", 00:14:51.553 "is_configured": true, 00:14:51.553 "data_offset": 0, 00:14:51.553 "data_size": 65536 00:14:51.553 }, 00:14:51.553 { 00:14:51.553 "name": "BaseBdev3", 00:14:51.553 "uuid": "6ebd47a3-3caf-4d14-8a6d-de35d4cfa66c", 00:14:51.553 "is_configured": true, 00:14:51.553 "data_offset": 0, 00:14:51.553 "data_size": 65536 00:14:51.553 } 00:14:51.553 ] 00:14:51.553 } 00:14:51.553 } 00:14:51.553 }' 00:14:51.553 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:51.553 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:51.553 BaseBdev2 00:14:51.553 BaseBdev3' 00:14:51.553 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.811 [2024-11-20 17:07:15.599832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:51.811 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.070 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:52.070 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:52.070 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:52.070 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:52.070 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:52.070 
17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:52.070 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.071 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.071 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.071 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.071 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.071 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.071 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.071 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.071 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.071 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.071 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.071 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.071 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.071 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.071 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.071 "name": "Existed_Raid", 00:14:52.071 "uuid": "70ed2e3f-c798-4b48-95b6-cb24a593eec5", 00:14:52.071 "strip_size_kb": 64, 00:14:52.071 "state": 
"online", 00:14:52.071 "raid_level": "raid5f", 00:14:52.071 "superblock": false, 00:14:52.071 "num_base_bdevs": 3, 00:14:52.071 "num_base_bdevs_discovered": 2, 00:14:52.071 "num_base_bdevs_operational": 2, 00:14:52.071 "base_bdevs_list": [ 00:14:52.071 { 00:14:52.071 "name": null, 00:14:52.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.071 "is_configured": false, 00:14:52.071 "data_offset": 0, 00:14:52.071 "data_size": 65536 00:14:52.071 }, 00:14:52.071 { 00:14:52.071 "name": "BaseBdev2", 00:14:52.071 "uuid": "11804127-1c2e-48a6-b336-59a5afc8ed6a", 00:14:52.071 "is_configured": true, 00:14:52.071 "data_offset": 0, 00:14:52.071 "data_size": 65536 00:14:52.071 }, 00:14:52.071 { 00:14:52.071 "name": "BaseBdev3", 00:14:52.071 "uuid": "6ebd47a3-3caf-4d14-8a6d-de35d4cfa66c", 00:14:52.071 "is_configured": true, 00:14:52.071 "data_offset": 0, 00:14:52.071 "data_size": 65536 00:14:52.071 } 00:14:52.071 ] 00:14:52.071 }' 00:14:52.071 17:07:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.071 17:07:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.638 [2024-11-20 17:07:16.305421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:52.638 [2024-11-20 17:07:16.305714] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.638 [2024-11-20 17:07:16.388180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.638 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.638 [2024-11-20 17:07:16.452269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:52.638 [2024-11-20 17:07:16.452488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.897 BaseBdev2 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.897 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:52.897 [ 00:14:52.897 { 00:14:52.897 "name": "BaseBdev2", 00:14:52.897 "aliases": [ 00:14:52.897 "f83bc16b-2f7a-42cb-8b7d-6e302a021a99" 00:14:52.897 ], 00:14:52.897 "product_name": "Malloc disk", 00:14:52.897 "block_size": 512, 00:14:52.897 "num_blocks": 65536, 00:14:52.897 "uuid": "f83bc16b-2f7a-42cb-8b7d-6e302a021a99", 00:14:52.897 "assigned_rate_limits": { 00:14:52.897 "rw_ios_per_sec": 0, 00:14:52.897 "rw_mbytes_per_sec": 0, 00:14:52.897 "r_mbytes_per_sec": 0, 00:14:52.897 "w_mbytes_per_sec": 0 00:14:52.897 }, 00:14:52.897 "claimed": false, 00:14:52.897 "zoned": false, 00:14:52.897 "supported_io_types": { 00:14:52.897 "read": true, 00:14:52.897 "write": true, 00:14:52.897 "unmap": true, 00:14:52.897 "flush": true, 00:14:52.897 "reset": true, 00:14:52.897 "nvme_admin": false, 00:14:52.897 "nvme_io": false, 00:14:52.897 "nvme_io_md": false, 00:14:52.897 "write_zeroes": true, 00:14:52.897 "zcopy": true, 00:14:52.897 "get_zone_info": false, 00:14:52.897 "zone_management": false, 00:14:52.897 "zone_append": false, 00:14:52.897 "compare": false, 00:14:52.897 "compare_and_write": false, 00:14:52.897 "abort": true, 00:14:52.897 "seek_hole": false, 00:14:52.897 "seek_data": false, 00:14:52.897 "copy": true, 00:14:52.897 "nvme_iov_md": false 00:14:52.897 }, 00:14:52.897 "memory_domains": [ 00:14:52.897 { 00:14:52.897 "dma_device_id": "system", 00:14:52.897 "dma_device_type": 1 00:14:52.898 }, 00:14:52.898 { 00:14:52.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.898 "dma_device_type": 2 00:14:52.898 } 00:14:52.898 ], 00:14:52.898 "driver_specific": {} 00:14:52.898 } 00:14:52.898 ] 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.898 BaseBdev3 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.898 [ 00:14:52.898 { 00:14:52.898 "name": "BaseBdev3", 00:14:52.898 "aliases": [ 00:14:52.898 "71a0560c-d37c-422f-99c3-e844996b47a7" 00:14:52.898 ], 00:14:52.898 "product_name": "Malloc disk", 00:14:52.898 "block_size": 512, 00:14:52.898 "num_blocks": 65536, 00:14:52.898 "uuid": "71a0560c-d37c-422f-99c3-e844996b47a7", 00:14:52.898 "assigned_rate_limits": { 00:14:52.898 "rw_ios_per_sec": 0, 00:14:52.898 "rw_mbytes_per_sec": 0, 00:14:52.898 "r_mbytes_per_sec": 0, 00:14:52.898 "w_mbytes_per_sec": 0 00:14:52.898 }, 00:14:52.898 "claimed": false, 00:14:52.898 "zoned": false, 00:14:52.898 "supported_io_types": { 00:14:52.898 "read": true, 00:14:52.898 "write": true, 00:14:52.898 "unmap": true, 00:14:52.898 "flush": true, 00:14:52.898 "reset": true, 00:14:52.898 "nvme_admin": false, 00:14:52.898 "nvme_io": false, 00:14:52.898 "nvme_io_md": false, 00:14:52.898 "write_zeroes": true, 00:14:52.898 "zcopy": true, 00:14:52.898 "get_zone_info": false, 00:14:52.898 "zone_management": false, 00:14:52.898 "zone_append": false, 00:14:52.898 "compare": false, 00:14:52.898 "compare_and_write": false, 00:14:52.898 "abort": true, 00:14:52.898 "seek_hole": false, 00:14:52.898 "seek_data": false, 00:14:52.898 "copy": true, 00:14:52.898 "nvme_iov_md": false 00:14:52.898 }, 00:14:52.898 "memory_domains": [ 00:14:52.898 { 00:14:52.898 "dma_device_id": "system", 00:14:52.898 "dma_device_type": 1 00:14:52.898 }, 00:14:52.898 { 00:14:52.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.898 "dma_device_type": 2 00:14:52.898 } 00:14:52.898 ], 00:14:52.898 "driver_specific": {} 00:14:52.898 } 00:14:52.898 ] 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:52.898 17:07:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.898 [2024-11-20 17:07:16.746582] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:52.898 [2024-11-20 17:07:16.746631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:52.898 [2024-11-20 17:07:16.746658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.898 [2024-11-20 17:07:16.749260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.898 17:07:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.898 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.157 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.158 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.158 "name": "Existed_Raid", 00:14:53.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.158 "strip_size_kb": 64, 00:14:53.158 "state": "configuring", 00:14:53.158 "raid_level": "raid5f", 00:14:53.158 "superblock": false, 00:14:53.158 "num_base_bdevs": 3, 00:14:53.158 "num_base_bdevs_discovered": 2, 00:14:53.158 "num_base_bdevs_operational": 3, 00:14:53.158 "base_bdevs_list": [ 00:14:53.158 { 00:14:53.158 "name": "BaseBdev1", 00:14:53.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.158 "is_configured": false, 00:14:53.158 "data_offset": 0, 00:14:53.158 "data_size": 0 00:14:53.158 }, 00:14:53.158 { 00:14:53.158 "name": "BaseBdev2", 00:14:53.158 "uuid": "f83bc16b-2f7a-42cb-8b7d-6e302a021a99", 00:14:53.158 "is_configured": true, 00:14:53.158 "data_offset": 0, 00:14:53.158 "data_size": 65536 00:14:53.158 }, 00:14:53.158 { 00:14:53.158 "name": "BaseBdev3", 00:14:53.158 "uuid": "71a0560c-d37c-422f-99c3-e844996b47a7", 00:14:53.158 "is_configured": true, 
00:14:53.158 "data_offset": 0, 00:14:53.158 "data_size": 65536 00:14:53.158 } 00:14:53.158 ] 00:14:53.158 }' 00:14:53.158 17:07:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.158 17:07:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.416 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:53.416 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.416 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.675 [2024-11-20 17:07:17.286806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:53.675 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.675 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:53.675 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.675 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.675 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.675 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.675 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.675 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.675 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.675 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.675 17:07:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.675 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.675 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.675 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.675 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.675 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.675 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.675 "name": "Existed_Raid", 00:14:53.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.675 "strip_size_kb": 64, 00:14:53.675 "state": "configuring", 00:14:53.675 "raid_level": "raid5f", 00:14:53.675 "superblock": false, 00:14:53.675 "num_base_bdevs": 3, 00:14:53.675 "num_base_bdevs_discovered": 1, 00:14:53.675 "num_base_bdevs_operational": 3, 00:14:53.675 "base_bdevs_list": [ 00:14:53.675 { 00:14:53.675 "name": "BaseBdev1", 00:14:53.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.675 "is_configured": false, 00:14:53.675 "data_offset": 0, 00:14:53.675 "data_size": 0 00:14:53.675 }, 00:14:53.675 { 00:14:53.675 "name": null, 00:14:53.675 "uuid": "f83bc16b-2f7a-42cb-8b7d-6e302a021a99", 00:14:53.675 "is_configured": false, 00:14:53.675 "data_offset": 0, 00:14:53.675 "data_size": 65536 00:14:53.675 }, 00:14:53.675 { 00:14:53.675 "name": "BaseBdev3", 00:14:53.675 "uuid": "71a0560c-d37c-422f-99c3-e844996b47a7", 00:14:53.675 "is_configured": true, 00:14:53.675 "data_offset": 0, 00:14:53.675 "data_size": 65536 00:14:53.675 } 00:14:53.675 ] 00:14:53.675 }' 00:14:53.675 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.675 17:07:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.243 [2024-11-20 17:07:17.912542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.243 BaseBdev1 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.243 17:07:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.243 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.243 [ 00:14:54.243 { 00:14:54.243 "name": "BaseBdev1", 00:14:54.243 "aliases": [ 00:14:54.243 "8d60f32b-85c5-45a3-96fd-7dc493dc9d96" 00:14:54.243 ], 00:14:54.244 "product_name": "Malloc disk", 00:14:54.244 "block_size": 512, 00:14:54.244 "num_blocks": 65536, 00:14:54.244 "uuid": "8d60f32b-85c5-45a3-96fd-7dc493dc9d96", 00:14:54.244 "assigned_rate_limits": { 00:14:54.244 "rw_ios_per_sec": 0, 00:14:54.244 "rw_mbytes_per_sec": 0, 00:14:54.244 "r_mbytes_per_sec": 0, 00:14:54.244 "w_mbytes_per_sec": 0 00:14:54.244 }, 00:14:54.244 "claimed": true, 00:14:54.244 "claim_type": "exclusive_write", 00:14:54.244 "zoned": false, 00:14:54.244 "supported_io_types": { 00:14:54.244 "read": true, 00:14:54.244 "write": true, 00:14:54.244 "unmap": true, 00:14:54.244 "flush": true, 00:14:54.244 "reset": true, 00:14:54.244 "nvme_admin": false, 00:14:54.244 "nvme_io": false, 00:14:54.244 "nvme_io_md": false, 00:14:54.244 "write_zeroes": true, 00:14:54.244 "zcopy": true, 00:14:54.244 "get_zone_info": false, 00:14:54.244 "zone_management": false, 00:14:54.244 "zone_append": false, 00:14:54.244 
"compare": false, 00:14:54.244 "compare_and_write": false, 00:14:54.244 "abort": true, 00:14:54.244 "seek_hole": false, 00:14:54.244 "seek_data": false, 00:14:54.244 "copy": true, 00:14:54.244 "nvme_iov_md": false 00:14:54.244 }, 00:14:54.244 "memory_domains": [ 00:14:54.244 { 00:14:54.244 "dma_device_id": "system", 00:14:54.244 "dma_device_type": 1 00:14:54.244 }, 00:14:54.244 { 00:14:54.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.244 "dma_device_type": 2 00:14:54.244 } 00:14:54.244 ], 00:14:54.244 "driver_specific": {} 00:14:54.244 } 00:14:54.244 ] 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.244 17:07:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.244 "name": "Existed_Raid", 00:14:54.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.244 "strip_size_kb": 64, 00:14:54.244 "state": "configuring", 00:14:54.244 "raid_level": "raid5f", 00:14:54.244 "superblock": false, 00:14:54.244 "num_base_bdevs": 3, 00:14:54.244 "num_base_bdevs_discovered": 2, 00:14:54.244 "num_base_bdevs_operational": 3, 00:14:54.244 "base_bdevs_list": [ 00:14:54.244 { 00:14:54.244 "name": "BaseBdev1", 00:14:54.244 "uuid": "8d60f32b-85c5-45a3-96fd-7dc493dc9d96", 00:14:54.244 "is_configured": true, 00:14:54.244 "data_offset": 0, 00:14:54.244 "data_size": 65536 00:14:54.244 }, 00:14:54.244 { 00:14:54.244 "name": null, 00:14:54.244 "uuid": "f83bc16b-2f7a-42cb-8b7d-6e302a021a99", 00:14:54.244 "is_configured": false, 00:14:54.244 "data_offset": 0, 00:14:54.244 "data_size": 65536 00:14:54.244 }, 00:14:54.244 { 00:14:54.244 "name": "BaseBdev3", 00:14:54.244 "uuid": "71a0560c-d37c-422f-99c3-e844996b47a7", 00:14:54.244 "is_configured": true, 00:14:54.244 "data_offset": 0, 00:14:54.244 "data_size": 65536 00:14:54.244 } 00:14:54.244 ] 00:14:54.244 }' 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.244 17:07:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.812 17:07:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.812 [2024-11-20 17:07:18.532851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.812 17:07:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.812 "name": "Existed_Raid", 00:14:54.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.812 "strip_size_kb": 64, 00:14:54.812 "state": "configuring", 00:14:54.812 "raid_level": "raid5f", 00:14:54.812 "superblock": false, 00:14:54.812 "num_base_bdevs": 3, 00:14:54.812 "num_base_bdevs_discovered": 1, 00:14:54.812 "num_base_bdevs_operational": 3, 00:14:54.812 "base_bdevs_list": [ 00:14:54.812 { 00:14:54.812 "name": "BaseBdev1", 00:14:54.812 "uuid": "8d60f32b-85c5-45a3-96fd-7dc493dc9d96", 00:14:54.812 "is_configured": true, 00:14:54.812 "data_offset": 0, 00:14:54.812 "data_size": 65536 00:14:54.812 }, 00:14:54.812 { 00:14:54.812 "name": null, 00:14:54.812 "uuid": "f83bc16b-2f7a-42cb-8b7d-6e302a021a99", 00:14:54.812 "is_configured": false, 00:14:54.812 "data_offset": 0, 00:14:54.812 "data_size": 65536 00:14:54.812 }, 00:14:54.812 { 00:14:54.812 "name": null, 
00:14:54.812 "uuid": "71a0560c-d37c-422f-99c3-e844996b47a7", 00:14:54.812 "is_configured": false, 00:14:54.812 "data_offset": 0, 00:14:54.812 "data_size": 65536 00:14:54.812 } 00:14:54.812 ] 00:14:54.812 }' 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.812 17:07:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.378 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.378 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:55.378 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.378 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.378 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.378 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.379 [2024-11-20 17:07:19.121053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.379 17:07:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.379 "name": "Existed_Raid", 00:14:55.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.379 "strip_size_kb": 64, 00:14:55.379 "state": "configuring", 00:14:55.379 "raid_level": "raid5f", 00:14:55.379 "superblock": false, 00:14:55.379 "num_base_bdevs": 3, 00:14:55.379 "num_base_bdevs_discovered": 2, 00:14:55.379 "num_base_bdevs_operational": 3, 00:14:55.379 "base_bdevs_list": [ 00:14:55.379 { 
00:14:55.379 "name": "BaseBdev1", 00:14:55.379 "uuid": "8d60f32b-85c5-45a3-96fd-7dc493dc9d96", 00:14:55.379 "is_configured": true, 00:14:55.379 "data_offset": 0, 00:14:55.379 "data_size": 65536 00:14:55.379 }, 00:14:55.379 { 00:14:55.379 "name": null, 00:14:55.379 "uuid": "f83bc16b-2f7a-42cb-8b7d-6e302a021a99", 00:14:55.379 "is_configured": false, 00:14:55.379 "data_offset": 0, 00:14:55.379 "data_size": 65536 00:14:55.379 }, 00:14:55.379 { 00:14:55.379 "name": "BaseBdev3", 00:14:55.379 "uuid": "71a0560c-d37c-422f-99c3-e844996b47a7", 00:14:55.379 "is_configured": true, 00:14:55.379 "data_offset": 0, 00:14:55.379 "data_size": 65536 00:14:55.379 } 00:14:55.379 ] 00:14:55.379 }' 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.379 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.964 [2024-11-20 17:07:19.721326] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.964 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.223 17:07:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.223 "name": "Existed_Raid", 00:14:56.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.223 "strip_size_kb": 64, 00:14:56.223 "state": "configuring", 00:14:56.223 "raid_level": "raid5f", 00:14:56.223 "superblock": false, 00:14:56.223 "num_base_bdevs": 3, 00:14:56.223 "num_base_bdevs_discovered": 1, 00:14:56.223 "num_base_bdevs_operational": 3, 00:14:56.223 "base_bdevs_list": [ 00:14:56.223 { 00:14:56.223 "name": null, 00:14:56.223 "uuid": "8d60f32b-85c5-45a3-96fd-7dc493dc9d96", 00:14:56.223 "is_configured": false, 00:14:56.224 "data_offset": 0, 00:14:56.224 "data_size": 65536 00:14:56.224 }, 00:14:56.224 { 00:14:56.224 "name": null, 00:14:56.224 "uuid": "f83bc16b-2f7a-42cb-8b7d-6e302a021a99", 00:14:56.224 "is_configured": false, 00:14:56.224 "data_offset": 0, 00:14:56.224 "data_size": 65536 00:14:56.224 }, 00:14:56.224 { 00:14:56.224 "name": "BaseBdev3", 00:14:56.224 "uuid": "71a0560c-d37c-422f-99c3-e844996b47a7", 00:14:56.224 "is_configured": true, 00:14:56.224 "data_offset": 0, 00:14:56.224 "data_size": 65536 00:14:56.224 } 00:14:56.224 ] 00:14:56.224 }' 00:14:56.224 17:07:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.224 17:07:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.482 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.482 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:56.482 17:07:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.482 17:07:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.741 [2024-11-20 17:07:20.393633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.741 17:07:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.741 "name": "Existed_Raid", 00:14:56.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.741 "strip_size_kb": 64, 00:14:56.741 "state": "configuring", 00:14:56.741 "raid_level": "raid5f", 00:14:56.741 "superblock": false, 00:14:56.741 "num_base_bdevs": 3, 00:14:56.741 "num_base_bdevs_discovered": 2, 00:14:56.741 "num_base_bdevs_operational": 3, 00:14:56.741 "base_bdevs_list": [ 00:14:56.741 { 00:14:56.741 "name": null, 00:14:56.741 "uuid": "8d60f32b-85c5-45a3-96fd-7dc493dc9d96", 00:14:56.741 "is_configured": false, 00:14:56.741 "data_offset": 0, 00:14:56.741 "data_size": 65536 00:14:56.741 }, 00:14:56.741 { 00:14:56.741 "name": "BaseBdev2", 00:14:56.741 "uuid": "f83bc16b-2f7a-42cb-8b7d-6e302a021a99", 00:14:56.741 "is_configured": true, 00:14:56.741 "data_offset": 0, 00:14:56.741 "data_size": 65536 00:14:56.741 }, 00:14:56.741 { 00:14:56.741 "name": "BaseBdev3", 00:14:56.741 "uuid": "71a0560c-d37c-422f-99c3-e844996b47a7", 00:14:56.741 "is_configured": true, 00:14:56.741 "data_offset": 0, 00:14:56.741 "data_size": 65536 00:14:56.741 } 00:14:56.741 ] 00:14:56.741 }' 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.741 17:07:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.309 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:57.309 
17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.309 17:07:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.309 17:07:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.309 17:07:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.309 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:57.309 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.309 17:07:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.309 17:07:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.309 17:07:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:57.309 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.309 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8d60f32b-85c5-45a3-96fd-7dc493dc9d96 00:14:57.309 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.309 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.309 [2024-11-20 17:07:21.082674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:57.309 [2024-11-20 17:07:21.082974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:57.309 [2024-11-20 17:07:21.083005] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:57.309 [2024-11-20 17:07:21.083344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:14:57.309 [2024-11-20 17:07:21.088697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:57.309 [2024-11-20 17:07:21.088881] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:57.309 [2024-11-20 17:07:21.089365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.309 NewBaseBdev 00:14:57.309 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.309 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:57.309 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:57.309 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.309 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:57.309 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.309 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.309 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.309 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.309 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.309 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.309 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:57.309 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.309 17:07:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.309 [ 00:14:57.309 { 00:14:57.309 "name": "NewBaseBdev", 00:14:57.309 "aliases": [ 00:14:57.309 "8d60f32b-85c5-45a3-96fd-7dc493dc9d96" 00:14:57.309 ], 00:14:57.309 "product_name": "Malloc disk", 00:14:57.309 "block_size": 512, 00:14:57.309 "num_blocks": 65536, 00:14:57.309 "uuid": "8d60f32b-85c5-45a3-96fd-7dc493dc9d96", 00:14:57.309 "assigned_rate_limits": { 00:14:57.309 "rw_ios_per_sec": 0, 00:14:57.309 "rw_mbytes_per_sec": 0, 00:14:57.309 "r_mbytes_per_sec": 0, 00:14:57.309 "w_mbytes_per_sec": 0 00:14:57.309 }, 00:14:57.309 "claimed": true, 00:14:57.309 "claim_type": "exclusive_write", 00:14:57.309 "zoned": false, 00:14:57.309 "supported_io_types": { 00:14:57.309 "read": true, 00:14:57.309 "write": true, 00:14:57.309 "unmap": true, 00:14:57.309 "flush": true, 00:14:57.309 "reset": true, 00:14:57.309 "nvme_admin": false, 00:14:57.309 "nvme_io": false, 00:14:57.309 "nvme_io_md": false, 00:14:57.309 "write_zeroes": true, 00:14:57.309 "zcopy": true, 00:14:57.309 "get_zone_info": false, 00:14:57.309 "zone_management": false, 00:14:57.309 "zone_append": false, 00:14:57.309 "compare": false, 00:14:57.309 "compare_and_write": false, 00:14:57.309 "abort": true, 00:14:57.309 "seek_hole": false, 00:14:57.309 "seek_data": false, 00:14:57.309 "copy": true, 00:14:57.309 "nvme_iov_md": false 00:14:57.309 }, 00:14:57.310 "memory_domains": [ 00:14:57.310 { 00:14:57.310 "dma_device_id": "system", 00:14:57.310 "dma_device_type": 1 00:14:57.310 }, 00:14:57.310 { 00:14:57.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.310 "dma_device_type": 2 00:14:57.310 } 00:14:57.310 ], 00:14:57.310 "driver_specific": {} 00:14:57.310 } 00:14:57.310 ] 00:14:57.310 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.310 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:57.310 17:07:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:57.310 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.310 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.310 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.310 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.310 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.310 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.310 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.310 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.310 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.310 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.310 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.310 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.310 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.310 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.568 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.568 "name": "Existed_Raid", 00:14:57.568 "uuid": "8d169b39-14c0-4fc8-9d5f-d3f575dcba17", 00:14:57.568 "strip_size_kb": 64, 00:14:57.568 "state": "online", 
00:14:57.568 "raid_level": "raid5f", 00:14:57.568 "superblock": false, 00:14:57.568 "num_base_bdevs": 3, 00:14:57.568 "num_base_bdevs_discovered": 3, 00:14:57.568 "num_base_bdevs_operational": 3, 00:14:57.568 "base_bdevs_list": [ 00:14:57.568 { 00:14:57.568 "name": "NewBaseBdev", 00:14:57.568 "uuid": "8d60f32b-85c5-45a3-96fd-7dc493dc9d96", 00:14:57.568 "is_configured": true, 00:14:57.568 "data_offset": 0, 00:14:57.568 "data_size": 65536 00:14:57.568 }, 00:14:57.568 { 00:14:57.568 "name": "BaseBdev2", 00:14:57.568 "uuid": "f83bc16b-2f7a-42cb-8b7d-6e302a021a99", 00:14:57.568 "is_configured": true, 00:14:57.568 "data_offset": 0, 00:14:57.568 "data_size": 65536 00:14:57.568 }, 00:14:57.568 { 00:14:57.568 "name": "BaseBdev3", 00:14:57.568 "uuid": "71a0560c-d37c-422f-99c3-e844996b47a7", 00:14:57.568 "is_configured": true, 00:14:57.568 "data_offset": 0, 00:14:57.568 "data_size": 65536 00:14:57.568 } 00:14:57.568 ] 00:14:57.568 }' 00:14:57.568 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.568 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.827 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:57.827 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:57.827 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:57.827 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:57.827 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:57.827 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:57.827 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:57.827 17:07:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:57.827 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.827 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.827 [2024-11-20 17:07:21.647652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.827 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.085 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:58.085 "name": "Existed_Raid", 00:14:58.085 "aliases": [ 00:14:58.085 "8d169b39-14c0-4fc8-9d5f-d3f575dcba17" 00:14:58.085 ], 00:14:58.085 "product_name": "Raid Volume", 00:14:58.085 "block_size": 512, 00:14:58.085 "num_blocks": 131072, 00:14:58.085 "uuid": "8d169b39-14c0-4fc8-9d5f-d3f575dcba17", 00:14:58.085 "assigned_rate_limits": { 00:14:58.085 "rw_ios_per_sec": 0, 00:14:58.085 "rw_mbytes_per_sec": 0, 00:14:58.085 "r_mbytes_per_sec": 0, 00:14:58.085 "w_mbytes_per_sec": 0 00:14:58.085 }, 00:14:58.085 "claimed": false, 00:14:58.085 "zoned": false, 00:14:58.085 "supported_io_types": { 00:14:58.085 "read": true, 00:14:58.085 "write": true, 00:14:58.085 "unmap": false, 00:14:58.085 "flush": false, 00:14:58.085 "reset": true, 00:14:58.085 "nvme_admin": false, 00:14:58.085 "nvme_io": false, 00:14:58.085 "nvme_io_md": false, 00:14:58.085 "write_zeroes": true, 00:14:58.085 "zcopy": false, 00:14:58.085 "get_zone_info": false, 00:14:58.085 "zone_management": false, 00:14:58.085 "zone_append": false, 00:14:58.085 "compare": false, 00:14:58.086 "compare_and_write": false, 00:14:58.086 "abort": false, 00:14:58.086 "seek_hole": false, 00:14:58.086 "seek_data": false, 00:14:58.086 "copy": false, 00:14:58.086 "nvme_iov_md": false 00:14:58.086 }, 00:14:58.086 "driver_specific": { 00:14:58.086 "raid": { 00:14:58.086 "uuid": 
"8d169b39-14c0-4fc8-9d5f-d3f575dcba17", 00:14:58.086 "strip_size_kb": 64, 00:14:58.086 "state": "online", 00:14:58.086 "raid_level": "raid5f", 00:14:58.086 "superblock": false, 00:14:58.086 "num_base_bdevs": 3, 00:14:58.086 "num_base_bdevs_discovered": 3, 00:14:58.086 "num_base_bdevs_operational": 3, 00:14:58.086 "base_bdevs_list": [ 00:14:58.086 { 00:14:58.086 "name": "NewBaseBdev", 00:14:58.086 "uuid": "8d60f32b-85c5-45a3-96fd-7dc493dc9d96", 00:14:58.086 "is_configured": true, 00:14:58.086 "data_offset": 0, 00:14:58.086 "data_size": 65536 00:14:58.086 }, 00:14:58.086 { 00:14:58.086 "name": "BaseBdev2", 00:14:58.086 "uuid": "f83bc16b-2f7a-42cb-8b7d-6e302a021a99", 00:14:58.086 "is_configured": true, 00:14:58.086 "data_offset": 0, 00:14:58.086 "data_size": 65536 00:14:58.086 }, 00:14:58.086 { 00:14:58.086 "name": "BaseBdev3", 00:14:58.086 "uuid": "71a0560c-d37c-422f-99c3-e844996b47a7", 00:14:58.086 "is_configured": true, 00:14:58.086 "data_offset": 0, 00:14:58.086 "data_size": 65536 00:14:58.086 } 00:14:58.086 ] 00:14:58.086 } 00:14:58.086 } 00:14:58.086 }' 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:58.086 BaseBdev2 00:14:58.086 BaseBdev3' 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.086 17:07:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.086 17:07:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.086 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.345 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.345 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.345 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:58.345 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.345 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.345 [2024-11-20 17:07:21.971486] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:58.345 [2024-11-20 17:07:21.971512] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.345 [2024-11-20 17:07:21.971615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.345 [2024-11-20 17:07:21.972020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.345 [2024-11-20 17:07:21.972140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:58.345 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.345 17:07:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79997 00:14:58.345 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79997 ']' 00:14:58.345 17:07:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79997 00:14:58.345 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:58.345 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:58.345 17:07:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79997 00:14:58.345 17:07:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:58.345 17:07:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:58.345 17:07:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79997' 00:14:58.345 killing process with pid 79997 00:14:58.345 17:07:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79997 00:14:58.345 [2024-11-20 17:07:22.012798] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.345 17:07:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79997 00:14:58.603 [2024-11-20 17:07:22.257182] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.539 17:07:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:59.540 00:14:59.540 real 0m11.979s 00:14:59.540 user 0m20.078s 00:14:59.540 sys 0m1.640s 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.540 ************************************ 00:14:59.540 END TEST raid5f_state_function_test 00:14:59.540 ************************************ 00:14:59.540 17:07:23 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:59.540 17:07:23 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:59.540 17:07:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.540 17:07:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:59.540 ************************************ 00:14:59.540 START TEST raid5f_state_function_test_sb 00:14:59.540 ************************************ 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:59.540 17:07:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80631 00:14:59.540 Process raid pid: 80631 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80631' 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80631 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80631 ']' 
00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:59.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.540 17:07:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.540 [2024-11-20 17:07:23.378509] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:14:59.540 [2024-11-20 17:07:23.378715] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.799 [2024-11-20 17:07:23.561940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.058 [2024-11-20 17:07:23.671636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.059 [2024-11-20 17:07:23.869767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.059 [2024-11-20 17:07:23.869809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.625 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.625 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:00.625 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:00.625 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.625 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.625 [2024-11-20 17:07:24.302177] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.625 [2024-11-20 17:07:24.302418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.626 [2024-11-20 17:07:24.302537] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.626 [2024-11-20 17:07:24.302594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.626 [2024-11-20 17:07:24.302611] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:00.626 [2024-11-20 17:07:24.302626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.626 17:07:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.626 "name": "Existed_Raid", 00:15:00.626 "uuid": "a1e74761-e3f7-4b85-89b1-2161f7ef4ec6", 00:15:00.626 "strip_size_kb": 64, 00:15:00.626 "state": "configuring", 00:15:00.626 "raid_level": "raid5f", 00:15:00.626 "superblock": true, 00:15:00.626 "num_base_bdevs": 3, 00:15:00.626 "num_base_bdevs_discovered": 0, 00:15:00.626 "num_base_bdevs_operational": 3, 00:15:00.626 "base_bdevs_list": [ 00:15:00.626 { 00:15:00.626 "name": "BaseBdev1", 00:15:00.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.626 "is_configured": false, 00:15:00.626 "data_offset": 0, 00:15:00.626 "data_size": 0 00:15:00.626 }, 00:15:00.626 { 00:15:00.626 "name": "BaseBdev2", 00:15:00.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.626 "is_configured": false, 00:15:00.626 "data_offset": 0, 00:15:00.626 "data_size": 0 00:15:00.626 }, 00:15:00.626 { 00:15:00.626 "name": "BaseBdev3", 00:15:00.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.626 "is_configured": false, 00:15:00.626 "data_offset": 0, 00:15:00.626 "data_size": 0 00:15:00.626 } 00:15:00.626 ] 00:15:00.626 }' 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.626 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.193 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:01.193 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.193 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.193 [2024-11-20 17:07:24.826312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:01.193 
[2024-11-20 17:07:24.826565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:01.193 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.193 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.194 [2024-11-20 17:07:24.834331] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:01.194 [2024-11-20 17:07:24.834540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:01.194 [2024-11-20 17:07:24.834658] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.194 [2024-11-20 17:07:24.834714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.194 [2024-11-20 17:07:24.834860] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:01.194 [2024-11-20 17:07:24.834919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.194 [2024-11-20 17:07:24.876759] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.194 BaseBdev1 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.194 [ 00:15:01.194 { 00:15:01.194 "name": "BaseBdev1", 00:15:01.194 "aliases": [ 00:15:01.194 "cf050198-0254-45f1-87b6-f73268dcbc41" 00:15:01.194 ], 00:15:01.194 "product_name": "Malloc disk", 00:15:01.194 "block_size": 512, 00:15:01.194 
"num_blocks": 65536, 00:15:01.194 "uuid": "cf050198-0254-45f1-87b6-f73268dcbc41", 00:15:01.194 "assigned_rate_limits": { 00:15:01.194 "rw_ios_per_sec": 0, 00:15:01.194 "rw_mbytes_per_sec": 0, 00:15:01.194 "r_mbytes_per_sec": 0, 00:15:01.194 "w_mbytes_per_sec": 0 00:15:01.194 }, 00:15:01.194 "claimed": true, 00:15:01.194 "claim_type": "exclusive_write", 00:15:01.194 "zoned": false, 00:15:01.194 "supported_io_types": { 00:15:01.194 "read": true, 00:15:01.194 "write": true, 00:15:01.194 "unmap": true, 00:15:01.194 "flush": true, 00:15:01.194 "reset": true, 00:15:01.194 "nvme_admin": false, 00:15:01.194 "nvme_io": false, 00:15:01.194 "nvme_io_md": false, 00:15:01.194 "write_zeroes": true, 00:15:01.194 "zcopy": true, 00:15:01.194 "get_zone_info": false, 00:15:01.194 "zone_management": false, 00:15:01.194 "zone_append": false, 00:15:01.194 "compare": false, 00:15:01.194 "compare_and_write": false, 00:15:01.194 "abort": true, 00:15:01.194 "seek_hole": false, 00:15:01.194 "seek_data": false, 00:15:01.194 "copy": true, 00:15:01.194 "nvme_iov_md": false 00:15:01.194 }, 00:15:01.194 "memory_domains": [ 00:15:01.194 { 00:15:01.194 "dma_device_id": "system", 00:15:01.194 "dma_device_type": 1 00:15:01.194 }, 00:15:01.194 { 00:15:01.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.194 "dma_device_type": 2 00:15:01.194 } 00:15:01.194 ], 00:15:01.194 "driver_specific": {} 00:15:01.194 } 00:15:01.194 ] 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.194 "name": "Existed_Raid", 00:15:01.194 "uuid": "82ed482d-9b36-4447-bae0-76b347f54947", 00:15:01.194 "strip_size_kb": 64, 00:15:01.194 "state": "configuring", 00:15:01.194 "raid_level": "raid5f", 00:15:01.194 "superblock": true, 00:15:01.194 "num_base_bdevs": 3, 00:15:01.194 "num_base_bdevs_discovered": 1, 00:15:01.194 "num_base_bdevs_operational": 3, 00:15:01.194 "base_bdevs_list": [ 00:15:01.194 { 00:15:01.194 
"name": "BaseBdev1", 00:15:01.194 "uuid": "cf050198-0254-45f1-87b6-f73268dcbc41", 00:15:01.194 "is_configured": true, 00:15:01.194 "data_offset": 2048, 00:15:01.194 "data_size": 63488 00:15:01.194 }, 00:15:01.194 { 00:15:01.194 "name": "BaseBdev2", 00:15:01.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.194 "is_configured": false, 00:15:01.194 "data_offset": 0, 00:15:01.194 "data_size": 0 00:15:01.194 }, 00:15:01.194 { 00:15:01.194 "name": "BaseBdev3", 00:15:01.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.194 "is_configured": false, 00:15:01.194 "data_offset": 0, 00:15:01.194 "data_size": 0 00:15:01.194 } 00:15:01.194 ] 00:15:01.194 }' 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.194 17:07:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.764 [2024-11-20 17:07:25.429048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:01.764 [2024-11-20 17:07:25.429155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:01.764 [2024-11-20 17:07:25.437152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.764 [2024-11-20 17:07:25.439953] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.764 [2024-11-20 17:07:25.440191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.764 [2024-11-20 17:07:25.440324] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:01.764 [2024-11-20 17:07:25.440393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.764 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.764 "name": "Existed_Raid", 00:15:01.764 "uuid": "6b25ea34-569f-401d-8079-0fe215e8b971", 00:15:01.764 "strip_size_kb": 64, 00:15:01.764 "state": "configuring", 00:15:01.764 "raid_level": "raid5f", 00:15:01.764 "superblock": true, 00:15:01.764 "num_base_bdevs": 3, 00:15:01.764 "num_base_bdevs_discovered": 1, 00:15:01.764 "num_base_bdevs_operational": 3, 00:15:01.764 "base_bdevs_list": [ 00:15:01.764 { 00:15:01.764 "name": "BaseBdev1", 00:15:01.764 "uuid": "cf050198-0254-45f1-87b6-f73268dcbc41", 00:15:01.764 "is_configured": true, 00:15:01.764 "data_offset": 2048, 00:15:01.764 "data_size": 63488 00:15:01.765 }, 00:15:01.765 { 00:15:01.765 "name": "BaseBdev2", 00:15:01.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.765 "is_configured": false, 00:15:01.765 "data_offset": 0, 00:15:01.765 "data_size": 0 00:15:01.765 }, 00:15:01.765 { 00:15:01.765 "name": "BaseBdev3", 00:15:01.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.765 "is_configured": false, 00:15:01.765 "data_offset": 0, 00:15:01.765 "data_size": 
0 00:15:01.765 } 00:15:01.765 ] 00:15:01.765 }' 00:15:01.765 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.765 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.332 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:02.332 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.332 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.332 [2024-11-20 17:07:25.995379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.332 BaseBdev2 00:15:02.332 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.332 17:07:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:02.332 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:02.332 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:02.332 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:02.332 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:02.332 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:02.332 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:02.332 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.332 17:07:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.332 [ 00:15:02.332 { 00:15:02.332 "name": "BaseBdev2", 00:15:02.332 "aliases": [ 00:15:02.332 "cfb57040-4232-4863-a38c-77a64dbe46db" 00:15:02.332 ], 00:15:02.332 "product_name": "Malloc disk", 00:15:02.332 "block_size": 512, 00:15:02.332 "num_blocks": 65536, 00:15:02.332 "uuid": "cfb57040-4232-4863-a38c-77a64dbe46db", 00:15:02.332 "assigned_rate_limits": { 00:15:02.332 "rw_ios_per_sec": 0, 00:15:02.332 "rw_mbytes_per_sec": 0, 00:15:02.332 "r_mbytes_per_sec": 0, 00:15:02.332 "w_mbytes_per_sec": 0 00:15:02.332 }, 00:15:02.332 "claimed": true, 00:15:02.332 "claim_type": "exclusive_write", 00:15:02.332 "zoned": false, 00:15:02.332 "supported_io_types": { 00:15:02.332 "read": true, 00:15:02.332 "write": true, 00:15:02.332 "unmap": true, 00:15:02.332 "flush": true, 00:15:02.332 "reset": true, 00:15:02.332 "nvme_admin": false, 00:15:02.332 "nvme_io": false, 00:15:02.332 "nvme_io_md": false, 00:15:02.332 "write_zeroes": true, 00:15:02.332 "zcopy": true, 00:15:02.332 "get_zone_info": false, 00:15:02.332 "zone_management": false, 00:15:02.332 "zone_append": false, 00:15:02.332 "compare": false, 00:15:02.332 "compare_and_write": false, 00:15:02.332 "abort": true, 00:15:02.332 "seek_hole": false, 00:15:02.332 "seek_data": false, 00:15:02.332 "copy": true, 00:15:02.332 "nvme_iov_md": false 00:15:02.332 }, 00:15:02.332 "memory_domains": [ 00:15:02.332 { 00:15:02.332 "dma_device_id": "system", 00:15:02.332 "dma_device_type": 1 00:15:02.332 }, 00:15:02.332 { 00:15:02.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.332 "dma_device_type": 2 00:15:02.332 } 
00:15:02.332 ], 00:15:02.332 "driver_specific": {} 00:15:02.332 } 00:15:02.332 ] 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.332 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.332 "name": "Existed_Raid", 00:15:02.332 "uuid": "6b25ea34-569f-401d-8079-0fe215e8b971", 00:15:02.332 "strip_size_kb": 64, 00:15:02.332 "state": "configuring", 00:15:02.332 "raid_level": "raid5f", 00:15:02.332 "superblock": true, 00:15:02.332 "num_base_bdevs": 3, 00:15:02.332 "num_base_bdevs_discovered": 2, 00:15:02.332 "num_base_bdevs_operational": 3, 00:15:02.332 "base_bdevs_list": [ 00:15:02.332 { 00:15:02.332 "name": "BaseBdev1", 00:15:02.332 "uuid": "cf050198-0254-45f1-87b6-f73268dcbc41", 00:15:02.332 "is_configured": true, 00:15:02.332 "data_offset": 2048, 00:15:02.333 "data_size": 63488 00:15:02.333 }, 00:15:02.333 { 00:15:02.333 "name": "BaseBdev2", 00:15:02.333 "uuid": "cfb57040-4232-4863-a38c-77a64dbe46db", 00:15:02.333 "is_configured": true, 00:15:02.333 "data_offset": 2048, 00:15:02.333 "data_size": 63488 00:15:02.333 }, 00:15:02.333 { 00:15:02.333 "name": "BaseBdev3", 00:15:02.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.333 "is_configured": false, 00:15:02.333 "data_offset": 0, 00:15:02.333 "data_size": 0 00:15:02.333 } 00:15:02.333 ] 00:15:02.333 }' 00:15:02.333 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.333 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.898 [2024-11-20 17:07:26.619096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:02.898 [2024-11-20 17:07:26.619623] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:02.898 [2024-11-20 17:07:26.619659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:02.898 BaseBdev3 00:15:02.898 [2024-11-20 17:07:26.620193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.898 [2024-11-20 17:07:26.625483] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:02.898 [2024-11-20 17:07:26.625625] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:02.898 [2024-11-20 17:07:26.626153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.898 [ 00:15:02.898 { 00:15:02.898 "name": "BaseBdev3", 00:15:02.898 "aliases": [ 00:15:02.898 "8ec36eb0-076b-41a9-880c-1a354af9c0c8" 00:15:02.898 ], 00:15:02.898 "product_name": "Malloc disk", 00:15:02.898 "block_size": 512, 00:15:02.898 "num_blocks": 65536, 00:15:02.898 "uuid": "8ec36eb0-076b-41a9-880c-1a354af9c0c8", 00:15:02.898 "assigned_rate_limits": { 00:15:02.898 "rw_ios_per_sec": 0, 00:15:02.898 "rw_mbytes_per_sec": 0, 00:15:02.898 "r_mbytes_per_sec": 0, 00:15:02.898 "w_mbytes_per_sec": 0 00:15:02.898 }, 00:15:02.898 "claimed": true, 00:15:02.898 "claim_type": "exclusive_write", 00:15:02.898 "zoned": false, 00:15:02.898 "supported_io_types": { 00:15:02.898 "read": true, 00:15:02.898 "write": true, 00:15:02.898 "unmap": true, 00:15:02.898 "flush": true, 00:15:02.898 "reset": true, 00:15:02.898 "nvme_admin": false, 00:15:02.898 "nvme_io": false, 00:15:02.898 "nvme_io_md": false, 00:15:02.898 "write_zeroes": true, 00:15:02.898 "zcopy": true, 00:15:02.898 "get_zone_info": false, 00:15:02.898 "zone_management": false, 00:15:02.898 "zone_append": false, 00:15:02.898 "compare": false, 00:15:02.898 "compare_and_write": false, 00:15:02.898 "abort": true, 00:15:02.898 "seek_hole": false, 00:15:02.898 "seek_data": false, 00:15:02.898 "copy": true, 00:15:02.898 
"nvme_iov_md": false 00:15:02.898 }, 00:15:02.898 "memory_domains": [ 00:15:02.898 { 00:15:02.898 "dma_device_id": "system", 00:15:02.898 "dma_device_type": 1 00:15:02.898 }, 00:15:02.898 { 00:15:02.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.898 "dma_device_type": 2 00:15:02.898 } 00:15:02.898 ], 00:15:02.898 "driver_specific": {} 00:15:02.898 } 00:15:02.898 ] 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.898 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.898 "name": "Existed_Raid", 00:15:02.898 "uuid": "6b25ea34-569f-401d-8079-0fe215e8b971", 00:15:02.898 "strip_size_kb": 64, 00:15:02.898 "state": "online", 00:15:02.898 "raid_level": "raid5f", 00:15:02.899 "superblock": true, 00:15:02.899 "num_base_bdevs": 3, 00:15:02.899 "num_base_bdevs_discovered": 3, 00:15:02.899 "num_base_bdevs_operational": 3, 00:15:02.899 "base_bdevs_list": [ 00:15:02.899 { 00:15:02.899 "name": "BaseBdev1", 00:15:02.899 "uuid": "cf050198-0254-45f1-87b6-f73268dcbc41", 00:15:02.899 "is_configured": true, 00:15:02.899 "data_offset": 2048, 00:15:02.899 "data_size": 63488 00:15:02.899 }, 00:15:02.899 { 00:15:02.899 "name": "BaseBdev2", 00:15:02.899 "uuid": "cfb57040-4232-4863-a38c-77a64dbe46db", 00:15:02.899 "is_configured": true, 00:15:02.899 "data_offset": 2048, 00:15:02.899 "data_size": 63488 00:15:02.899 }, 00:15:02.899 { 00:15:02.899 "name": "BaseBdev3", 00:15:02.899 "uuid": "8ec36eb0-076b-41a9-880c-1a354af9c0c8", 00:15:02.899 "is_configured": true, 00:15:02.899 "data_offset": 2048, 00:15:02.899 "data_size": 63488 00:15:02.899 } 00:15:02.899 ] 00:15:02.899 }' 00:15:02.899 17:07:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.899 17:07:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.464 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:03.464 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:03.464 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:03.464 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:03.465 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:03.465 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:03.465 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:03.465 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.465 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.465 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:03.465 [2024-11-20 17:07:27.192405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.465 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.465 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:03.465 "name": "Existed_Raid", 00:15:03.465 "aliases": [ 00:15:03.465 "6b25ea34-569f-401d-8079-0fe215e8b971" 00:15:03.465 ], 00:15:03.465 "product_name": "Raid Volume", 00:15:03.465 "block_size": 512, 00:15:03.465 "num_blocks": 126976, 00:15:03.465 "uuid": "6b25ea34-569f-401d-8079-0fe215e8b971", 00:15:03.465 "assigned_rate_limits": { 00:15:03.465 "rw_ios_per_sec": 0, 00:15:03.465 
"rw_mbytes_per_sec": 0, 00:15:03.465 "r_mbytes_per_sec": 0, 00:15:03.465 "w_mbytes_per_sec": 0 00:15:03.465 }, 00:15:03.465 "claimed": false, 00:15:03.465 "zoned": false, 00:15:03.465 "supported_io_types": { 00:15:03.465 "read": true, 00:15:03.465 "write": true, 00:15:03.465 "unmap": false, 00:15:03.465 "flush": false, 00:15:03.465 "reset": true, 00:15:03.465 "nvme_admin": false, 00:15:03.465 "nvme_io": false, 00:15:03.465 "nvme_io_md": false, 00:15:03.465 "write_zeroes": true, 00:15:03.465 "zcopy": false, 00:15:03.465 "get_zone_info": false, 00:15:03.465 "zone_management": false, 00:15:03.465 "zone_append": false, 00:15:03.465 "compare": false, 00:15:03.465 "compare_and_write": false, 00:15:03.465 "abort": false, 00:15:03.465 "seek_hole": false, 00:15:03.465 "seek_data": false, 00:15:03.465 "copy": false, 00:15:03.465 "nvme_iov_md": false 00:15:03.465 }, 00:15:03.465 "driver_specific": { 00:15:03.465 "raid": { 00:15:03.465 "uuid": "6b25ea34-569f-401d-8079-0fe215e8b971", 00:15:03.465 "strip_size_kb": 64, 00:15:03.465 "state": "online", 00:15:03.465 "raid_level": "raid5f", 00:15:03.465 "superblock": true, 00:15:03.465 "num_base_bdevs": 3, 00:15:03.465 "num_base_bdevs_discovered": 3, 00:15:03.465 "num_base_bdevs_operational": 3, 00:15:03.465 "base_bdevs_list": [ 00:15:03.465 { 00:15:03.465 "name": "BaseBdev1", 00:15:03.465 "uuid": "cf050198-0254-45f1-87b6-f73268dcbc41", 00:15:03.465 "is_configured": true, 00:15:03.465 "data_offset": 2048, 00:15:03.465 "data_size": 63488 00:15:03.465 }, 00:15:03.465 { 00:15:03.465 "name": "BaseBdev2", 00:15:03.465 "uuid": "cfb57040-4232-4863-a38c-77a64dbe46db", 00:15:03.465 "is_configured": true, 00:15:03.465 "data_offset": 2048, 00:15:03.465 "data_size": 63488 00:15:03.465 }, 00:15:03.465 { 00:15:03.465 "name": "BaseBdev3", 00:15:03.465 "uuid": "8ec36eb0-076b-41a9-880c-1a354af9c0c8", 00:15:03.465 "is_configured": true, 00:15:03.465 "data_offset": 2048, 00:15:03.465 "data_size": 63488 00:15:03.465 } 00:15:03.465 ] 00:15:03.465 } 
00:15:03.465 } 00:15:03.465 }' 00:15:03.465 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:03.465 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:03.465 BaseBdev2 00:15:03.465 BaseBdev3' 00:15:03.465 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.731 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.731 [2024-11-20 17:07:27.536299] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.989 "name": "Existed_Raid", 00:15:03.989 "uuid": "6b25ea34-569f-401d-8079-0fe215e8b971", 00:15:03.989 "strip_size_kb": 64, 00:15:03.989 "state": "online", 00:15:03.989 "raid_level": "raid5f", 00:15:03.989 "superblock": true, 00:15:03.989 "num_base_bdevs": 3, 00:15:03.989 "num_base_bdevs_discovered": 2, 00:15:03.989 "num_base_bdevs_operational": 2, 00:15:03.989 "base_bdevs_list": [ 00:15:03.989 { 00:15:03.989 "name": null, 00:15:03.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.989 "is_configured": false, 00:15:03.989 "data_offset": 0, 00:15:03.989 "data_size": 63488 00:15:03.989 }, 00:15:03.989 { 00:15:03.989 "name": "BaseBdev2", 00:15:03.989 "uuid": "cfb57040-4232-4863-a38c-77a64dbe46db", 00:15:03.989 "is_configured": true, 00:15:03.989 "data_offset": 2048, 00:15:03.989 "data_size": 63488 00:15:03.989 }, 00:15:03.989 { 00:15:03.989 "name": "BaseBdev3", 00:15:03.989 "uuid": "8ec36eb0-076b-41a9-880c-1a354af9c0c8", 00:15:03.989 "is_configured": true, 00:15:03.989 "data_offset": 2048, 00:15:03.989 "data_size": 63488 00:15:03.989 } 00:15:03.989 ] 00:15:03.989 }' 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.989 17:07:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.557 17:07:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.557 [2024-11-20 17:07:28.238303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:04.557 [2024-11-20 17:07:28.238635] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.557 [2024-11-20 17:07:28.318436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.557 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.557 [2024-11-20 17:07:28.378447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:04.557 [2024-11-20 17:07:28.378657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.816 BaseBdev2 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:04.816 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.817 [ 00:15:04.817 { 00:15:04.817 "name": "BaseBdev2", 00:15:04.817 "aliases": [ 00:15:04.817 "398a9d05-7acf-45cc-8a1c-d1cec4489b77" 00:15:04.817 ], 00:15:04.817 "product_name": "Malloc disk", 00:15:04.817 "block_size": 512, 00:15:04.817 "num_blocks": 65536, 00:15:04.817 "uuid": "398a9d05-7acf-45cc-8a1c-d1cec4489b77", 00:15:04.817 "assigned_rate_limits": { 00:15:04.817 "rw_ios_per_sec": 0, 00:15:04.817 "rw_mbytes_per_sec": 0, 00:15:04.817 "r_mbytes_per_sec": 0, 00:15:04.817 "w_mbytes_per_sec": 0 00:15:04.817 }, 00:15:04.817 "claimed": false, 00:15:04.817 "zoned": false, 00:15:04.817 "supported_io_types": { 00:15:04.817 "read": true, 00:15:04.817 "write": true, 00:15:04.817 "unmap": true, 00:15:04.817 "flush": true, 00:15:04.817 "reset": true, 00:15:04.817 "nvme_admin": false, 00:15:04.817 "nvme_io": false, 00:15:04.817 "nvme_io_md": false, 00:15:04.817 "write_zeroes": true, 00:15:04.817 "zcopy": true, 00:15:04.817 "get_zone_info": false, 00:15:04.817 "zone_management": false, 00:15:04.817 "zone_append": false, 
00:15:04.817 "compare": false, 00:15:04.817 "compare_and_write": false, 00:15:04.817 "abort": true, 00:15:04.817 "seek_hole": false, 00:15:04.817 "seek_data": false, 00:15:04.817 "copy": true, 00:15:04.817 "nvme_iov_md": false 00:15:04.817 }, 00:15:04.817 "memory_domains": [ 00:15:04.817 { 00:15:04.817 "dma_device_id": "system", 00:15:04.817 "dma_device_type": 1 00:15:04.817 }, 00:15:04.817 { 00:15:04.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.817 "dma_device_type": 2 00:15:04.817 } 00:15:04.817 ], 00:15:04.817 "driver_specific": {} 00:15:04.817 } 00:15:04.817 ] 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.817 BaseBdev3 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:04.817 
17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.817 [ 00:15:04.817 { 00:15:04.817 "name": "BaseBdev3", 00:15:04.817 "aliases": [ 00:15:04.817 "97321e90-444b-4b5c-a782-ef8ee390475e" 00:15:04.817 ], 00:15:04.817 "product_name": "Malloc disk", 00:15:04.817 "block_size": 512, 00:15:04.817 "num_blocks": 65536, 00:15:04.817 "uuid": "97321e90-444b-4b5c-a782-ef8ee390475e", 00:15:04.817 "assigned_rate_limits": { 00:15:04.817 "rw_ios_per_sec": 0, 00:15:04.817 "rw_mbytes_per_sec": 0, 00:15:04.817 "r_mbytes_per_sec": 0, 00:15:04.817 "w_mbytes_per_sec": 0 00:15:04.817 }, 00:15:04.817 "claimed": false, 00:15:04.817 "zoned": false, 00:15:04.817 "supported_io_types": { 00:15:04.817 "read": true, 00:15:04.817 "write": true, 00:15:04.817 "unmap": true, 00:15:04.817 "flush": true, 00:15:04.817 "reset": true, 00:15:04.817 "nvme_admin": false, 00:15:04.817 "nvme_io": false, 00:15:04.817 "nvme_io_md": false, 00:15:04.817 "write_zeroes": true, 00:15:04.817 "zcopy": true, 00:15:04.817 "get_zone_info": 
false, 00:15:04.817 "zone_management": false, 00:15:04.817 "zone_append": false, 00:15:04.817 "compare": false, 00:15:04.817 "compare_and_write": false, 00:15:04.817 "abort": true, 00:15:04.817 "seek_hole": false, 00:15:04.817 "seek_data": false, 00:15:04.817 "copy": true, 00:15:04.817 "nvme_iov_md": false 00:15:04.817 }, 00:15:04.817 "memory_domains": [ 00:15:04.817 { 00:15:04.817 "dma_device_id": "system", 00:15:04.817 "dma_device_type": 1 00:15:04.817 }, 00:15:04.817 { 00:15:04.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.817 "dma_device_type": 2 00:15:04.817 } 00:15:04.817 ], 00:15:04.817 "driver_specific": {} 00:15:04.817 } 00:15:04.817 ] 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.817 [2024-11-20 17:07:28.676366] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:04.817 [2024-11-20 17:07:28.676581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:04.817 [2024-11-20 17:07:28.676630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.817 [2024-11-20 17:07:28.679023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.817 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.075 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.075 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.075 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.075 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.075 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.075 17:07:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.076 "name": "Existed_Raid", 00:15:05.076 "uuid": "8a82d671-7633-4c9e-8fa6-7e472b4d1498", 00:15:05.076 "strip_size_kb": 64, 00:15:05.076 "state": "configuring", 00:15:05.076 "raid_level": "raid5f", 00:15:05.076 "superblock": true, 00:15:05.076 "num_base_bdevs": 3, 00:15:05.076 "num_base_bdevs_discovered": 2, 00:15:05.076 "num_base_bdevs_operational": 3, 00:15:05.076 "base_bdevs_list": [ 00:15:05.076 { 00:15:05.076 "name": "BaseBdev1", 00:15:05.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.076 "is_configured": false, 00:15:05.076 "data_offset": 0, 00:15:05.076 "data_size": 0 00:15:05.076 }, 00:15:05.076 { 00:15:05.076 "name": "BaseBdev2", 00:15:05.076 "uuid": "398a9d05-7acf-45cc-8a1c-d1cec4489b77", 00:15:05.076 "is_configured": true, 00:15:05.076 "data_offset": 2048, 00:15:05.076 "data_size": 63488 00:15:05.076 }, 00:15:05.076 { 00:15:05.076 "name": "BaseBdev3", 00:15:05.076 "uuid": "97321e90-444b-4b5c-a782-ef8ee390475e", 00:15:05.076 "is_configured": true, 00:15:05.076 "data_offset": 2048, 00:15:05.076 "data_size": 63488 00:15:05.076 } 00:15:05.076 ] 00:15:05.076 }' 00:15:05.076 17:07:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.076 17:07:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.643 [2024-11-20 17:07:29.212496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.643 
17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.643 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.643 "name": "Existed_Raid", 00:15:05.643 "uuid": 
"8a82d671-7633-4c9e-8fa6-7e472b4d1498", 00:15:05.643 "strip_size_kb": 64, 00:15:05.643 "state": "configuring", 00:15:05.643 "raid_level": "raid5f", 00:15:05.643 "superblock": true, 00:15:05.643 "num_base_bdevs": 3, 00:15:05.643 "num_base_bdevs_discovered": 1, 00:15:05.643 "num_base_bdevs_operational": 3, 00:15:05.643 "base_bdevs_list": [ 00:15:05.643 { 00:15:05.643 "name": "BaseBdev1", 00:15:05.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.643 "is_configured": false, 00:15:05.643 "data_offset": 0, 00:15:05.643 "data_size": 0 00:15:05.643 }, 00:15:05.643 { 00:15:05.643 "name": null, 00:15:05.643 "uuid": "398a9d05-7acf-45cc-8a1c-d1cec4489b77", 00:15:05.644 "is_configured": false, 00:15:05.644 "data_offset": 0, 00:15:05.644 "data_size": 63488 00:15:05.644 }, 00:15:05.644 { 00:15:05.644 "name": "BaseBdev3", 00:15:05.644 "uuid": "97321e90-444b-4b5c-a782-ef8ee390475e", 00:15:05.644 "is_configured": true, 00:15:05.644 "data_offset": 2048, 00:15:05.644 "data_size": 63488 00:15:05.644 } 00:15:05.644 ] 00:15:05.644 }' 00:15:05.644 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.644 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.902 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:05.902 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.902 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.902 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.902 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:06.161 17:07:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.161 [2024-11-20 17:07:29.832303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.161 BaseBdev1 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.161 [ 00:15:06.161 { 00:15:06.161 "name": "BaseBdev1", 00:15:06.161 "aliases": [ 00:15:06.161 "21c8819c-d2b7-4b15-bec6-1f9ca317e0d6" 00:15:06.161 ], 00:15:06.161 "product_name": "Malloc disk", 00:15:06.161 "block_size": 512, 00:15:06.161 "num_blocks": 65536, 00:15:06.161 "uuid": "21c8819c-d2b7-4b15-bec6-1f9ca317e0d6", 00:15:06.161 "assigned_rate_limits": { 00:15:06.161 "rw_ios_per_sec": 0, 00:15:06.161 "rw_mbytes_per_sec": 0, 00:15:06.161 "r_mbytes_per_sec": 0, 00:15:06.161 "w_mbytes_per_sec": 0 00:15:06.161 }, 00:15:06.161 "claimed": true, 00:15:06.161 "claim_type": "exclusive_write", 00:15:06.161 "zoned": false, 00:15:06.161 "supported_io_types": { 00:15:06.161 "read": true, 00:15:06.161 "write": true, 00:15:06.161 "unmap": true, 00:15:06.161 "flush": true, 00:15:06.161 "reset": true, 00:15:06.161 "nvme_admin": false, 00:15:06.161 "nvme_io": false, 00:15:06.161 "nvme_io_md": false, 00:15:06.161 "write_zeroes": true, 00:15:06.161 "zcopy": true, 00:15:06.161 "get_zone_info": false, 00:15:06.161 "zone_management": false, 00:15:06.161 "zone_append": false, 00:15:06.161 "compare": false, 00:15:06.161 "compare_and_write": false, 00:15:06.161 "abort": true, 00:15:06.161 "seek_hole": false, 00:15:06.161 "seek_data": false, 00:15:06.161 "copy": true, 00:15:06.161 "nvme_iov_md": false 00:15:06.161 }, 00:15:06.161 "memory_domains": [ 00:15:06.161 { 00:15:06.161 "dma_device_id": "system", 00:15:06.161 "dma_device_type": 1 00:15:06.161 }, 00:15:06.161 { 00:15:06.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.161 "dma_device_type": 2 00:15:06.161 } 00:15:06.161 ], 00:15:06.161 "driver_specific": {} 00:15:06.161 } 00:15:06.161 ] 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.161 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.161 "name": "Existed_Raid", 00:15:06.161 "uuid": 
"8a82d671-7633-4c9e-8fa6-7e472b4d1498", 00:15:06.161 "strip_size_kb": 64, 00:15:06.161 "state": "configuring", 00:15:06.161 "raid_level": "raid5f", 00:15:06.161 "superblock": true, 00:15:06.161 "num_base_bdevs": 3, 00:15:06.161 "num_base_bdevs_discovered": 2, 00:15:06.161 "num_base_bdevs_operational": 3, 00:15:06.161 "base_bdevs_list": [ 00:15:06.161 { 00:15:06.161 "name": "BaseBdev1", 00:15:06.161 "uuid": "21c8819c-d2b7-4b15-bec6-1f9ca317e0d6", 00:15:06.161 "is_configured": true, 00:15:06.161 "data_offset": 2048, 00:15:06.161 "data_size": 63488 00:15:06.161 }, 00:15:06.161 { 00:15:06.161 "name": null, 00:15:06.161 "uuid": "398a9d05-7acf-45cc-8a1c-d1cec4489b77", 00:15:06.161 "is_configured": false, 00:15:06.161 "data_offset": 0, 00:15:06.161 "data_size": 63488 00:15:06.161 }, 00:15:06.161 { 00:15:06.161 "name": "BaseBdev3", 00:15:06.161 "uuid": "97321e90-444b-4b5c-a782-ef8ee390475e", 00:15:06.161 "is_configured": true, 00:15:06.161 "data_offset": 2048, 00:15:06.161 "data_size": 63488 00:15:06.161 } 00:15:06.162 ] 00:15:06.162 }' 00:15:06.162 17:07:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.162 17:07:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:06.729 17:07:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.729 [2024-11-20 17:07:30.448518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.729 "name": "Existed_Raid", 00:15:06.729 "uuid": "8a82d671-7633-4c9e-8fa6-7e472b4d1498", 00:15:06.729 "strip_size_kb": 64, 00:15:06.729 "state": "configuring", 00:15:06.729 "raid_level": "raid5f", 00:15:06.729 "superblock": true, 00:15:06.729 "num_base_bdevs": 3, 00:15:06.729 "num_base_bdevs_discovered": 1, 00:15:06.729 "num_base_bdevs_operational": 3, 00:15:06.729 "base_bdevs_list": [ 00:15:06.729 { 00:15:06.729 "name": "BaseBdev1", 00:15:06.729 "uuid": "21c8819c-d2b7-4b15-bec6-1f9ca317e0d6", 00:15:06.729 "is_configured": true, 00:15:06.729 "data_offset": 2048, 00:15:06.729 "data_size": 63488 00:15:06.729 }, 00:15:06.729 { 00:15:06.729 "name": null, 00:15:06.729 "uuid": "398a9d05-7acf-45cc-8a1c-d1cec4489b77", 00:15:06.729 "is_configured": false, 00:15:06.729 "data_offset": 0, 00:15:06.729 "data_size": 63488 00:15:06.729 }, 00:15:06.729 { 00:15:06.729 "name": null, 00:15:06.729 "uuid": "97321e90-444b-4b5c-a782-ef8ee390475e", 00:15:06.729 "is_configured": false, 00:15:06.729 "data_offset": 0, 00:15:06.729 "data_size": 63488 00:15:06.729 } 00:15:06.729 ] 00:15:06.729 }' 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.729 17:07:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.296 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.296 17:07:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.296 17:07:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.296 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:07.296 17:07:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.296 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:07.296 17:07:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:07.296 17:07:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.296 17:07:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.296 [2024-11-20 17:07:30.996716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.296 "name": "Existed_Raid", 00:15:07.296 "uuid": "8a82d671-7633-4c9e-8fa6-7e472b4d1498", 00:15:07.296 "strip_size_kb": 64, 00:15:07.296 "state": "configuring", 00:15:07.296 "raid_level": "raid5f", 00:15:07.296 "superblock": true, 00:15:07.296 "num_base_bdevs": 3, 00:15:07.296 "num_base_bdevs_discovered": 2, 00:15:07.296 "num_base_bdevs_operational": 3, 00:15:07.296 "base_bdevs_list": [ 00:15:07.296 { 00:15:07.296 "name": "BaseBdev1", 00:15:07.296 "uuid": "21c8819c-d2b7-4b15-bec6-1f9ca317e0d6", 00:15:07.296 "is_configured": true, 00:15:07.296 "data_offset": 2048, 00:15:07.296 "data_size": 63488 00:15:07.296 }, 00:15:07.296 { 00:15:07.296 "name": null, 00:15:07.296 "uuid": "398a9d05-7acf-45cc-8a1c-d1cec4489b77", 00:15:07.296 "is_configured": false, 00:15:07.296 "data_offset": 0, 00:15:07.296 "data_size": 63488 00:15:07.296 }, 00:15:07.296 { 00:15:07.296 "name": "BaseBdev3", 00:15:07.296 "uuid": "97321e90-444b-4b5c-a782-ef8ee390475e", 
00:15:07.296 "is_configured": true, 00:15:07.296 "data_offset": 2048, 00:15:07.296 "data_size": 63488 00:15:07.296 } 00:15:07.296 ] 00:15:07.296 }' 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.296 17:07:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.864 [2024-11-20 17:07:31.580906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.864 "name": "Existed_Raid", 00:15:07.864 "uuid": "8a82d671-7633-4c9e-8fa6-7e472b4d1498", 00:15:07.864 "strip_size_kb": 64, 00:15:07.864 "state": "configuring", 00:15:07.864 "raid_level": "raid5f", 00:15:07.864 "superblock": true, 00:15:07.864 "num_base_bdevs": 3, 00:15:07.864 "num_base_bdevs_discovered": 1, 00:15:07.864 "num_base_bdevs_operational": 3, 00:15:07.864 "base_bdevs_list": [ 00:15:07.864 { 00:15:07.864 
"name": null, 00:15:07.864 "uuid": "21c8819c-d2b7-4b15-bec6-1f9ca317e0d6", 00:15:07.864 "is_configured": false, 00:15:07.864 "data_offset": 0, 00:15:07.864 "data_size": 63488 00:15:07.864 }, 00:15:07.864 { 00:15:07.864 "name": null, 00:15:07.864 "uuid": "398a9d05-7acf-45cc-8a1c-d1cec4489b77", 00:15:07.864 "is_configured": false, 00:15:07.864 "data_offset": 0, 00:15:07.864 "data_size": 63488 00:15:07.864 }, 00:15:07.864 { 00:15:07.864 "name": "BaseBdev3", 00:15:07.864 "uuid": "97321e90-444b-4b5c-a782-ef8ee390475e", 00:15:07.864 "is_configured": true, 00:15:07.864 "data_offset": 2048, 00:15:07.864 "data_size": 63488 00:15:07.864 } 00:15:07.864 ] 00:15:07.864 }' 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.864 17:07:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.431 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.431 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:08.431 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.431 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.431 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.431 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:08.431 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.432 [2024-11-20 
17:07:32.244367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.432 17:07:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.691 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.691 "name": "Existed_Raid", 00:15:08.691 "uuid": "8a82d671-7633-4c9e-8fa6-7e472b4d1498", 00:15:08.691 "strip_size_kb": 64, 00:15:08.691 "state": "configuring", 00:15:08.691 "raid_level": "raid5f", 00:15:08.691 "superblock": true, 00:15:08.691 "num_base_bdevs": 3, 00:15:08.691 "num_base_bdevs_discovered": 2, 00:15:08.691 "num_base_bdevs_operational": 3, 00:15:08.691 "base_bdevs_list": [ 00:15:08.691 { 00:15:08.691 "name": null, 00:15:08.691 "uuid": "21c8819c-d2b7-4b15-bec6-1f9ca317e0d6", 00:15:08.691 "is_configured": false, 00:15:08.691 "data_offset": 0, 00:15:08.691 "data_size": 63488 00:15:08.691 }, 00:15:08.691 { 00:15:08.691 "name": "BaseBdev2", 00:15:08.691 "uuid": "398a9d05-7acf-45cc-8a1c-d1cec4489b77", 00:15:08.691 "is_configured": true, 00:15:08.691 "data_offset": 2048, 00:15:08.691 "data_size": 63488 00:15:08.691 }, 00:15:08.691 { 00:15:08.691 "name": "BaseBdev3", 00:15:08.691 "uuid": "97321e90-444b-4b5c-a782-ef8ee390475e", 00:15:08.691 "is_configured": true, 00:15:08.691 "data_offset": 2048, 00:15:08.691 "data_size": 63488 00:15:08.691 } 00:15:08.691 ] 00:15:08.691 }' 00:15:08.691 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.691 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.949 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.949 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:08.949 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.949 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.949 17:07:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 21c8819c-d2b7-4b15-bec6-1f9ca317e0d6 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.208 [2024-11-20 17:07:32.938359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:09.208 NewBaseBdev 00:15:09.208 [2024-11-20 17:07:32.938829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:09.208 [2024-11-20 17:07:32.938861] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:09.208 [2024-11-20 17:07:32.939164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:09.208 17:07:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.208 [2024-11-20 17:07:32.944028] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:09.208 [2024-11-20 17:07:32.944170] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:09.208 [2024-11-20 17:07:32.944597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.208 [ 00:15:09.208 { 00:15:09.208 "name": "NewBaseBdev", 00:15:09.208 "aliases": [ 00:15:09.208 "21c8819c-d2b7-4b15-bec6-1f9ca317e0d6" 00:15:09.208 ], 00:15:09.208 "product_name": "Malloc 
disk", 00:15:09.208 "block_size": 512, 00:15:09.208 "num_blocks": 65536, 00:15:09.208 "uuid": "21c8819c-d2b7-4b15-bec6-1f9ca317e0d6", 00:15:09.208 "assigned_rate_limits": { 00:15:09.208 "rw_ios_per_sec": 0, 00:15:09.208 "rw_mbytes_per_sec": 0, 00:15:09.208 "r_mbytes_per_sec": 0, 00:15:09.208 "w_mbytes_per_sec": 0 00:15:09.208 }, 00:15:09.208 "claimed": true, 00:15:09.208 "claim_type": "exclusive_write", 00:15:09.208 "zoned": false, 00:15:09.208 "supported_io_types": { 00:15:09.208 "read": true, 00:15:09.208 "write": true, 00:15:09.208 "unmap": true, 00:15:09.208 "flush": true, 00:15:09.208 "reset": true, 00:15:09.208 "nvme_admin": false, 00:15:09.208 "nvme_io": false, 00:15:09.208 "nvme_io_md": false, 00:15:09.208 "write_zeroes": true, 00:15:09.208 "zcopy": true, 00:15:09.208 "get_zone_info": false, 00:15:09.208 "zone_management": false, 00:15:09.208 "zone_append": false, 00:15:09.208 "compare": false, 00:15:09.208 "compare_and_write": false, 00:15:09.208 "abort": true, 00:15:09.208 "seek_hole": false, 00:15:09.208 "seek_data": false, 00:15:09.208 "copy": true, 00:15:09.208 "nvme_iov_md": false 00:15:09.208 }, 00:15:09.208 "memory_domains": [ 00:15:09.208 { 00:15:09.208 "dma_device_id": "system", 00:15:09.208 "dma_device_type": 1 00:15:09.208 }, 00:15:09.208 { 00:15:09.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.208 "dma_device_type": 2 00:15:09.208 } 00:15:09.208 ], 00:15:09.208 "driver_specific": {} 00:15:09.208 } 00:15:09.208 ] 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.208 17:07:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.208 17:07:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.208 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.208 "name": "Existed_Raid", 00:15:09.208 "uuid": "8a82d671-7633-4c9e-8fa6-7e472b4d1498", 00:15:09.208 "strip_size_kb": 64, 00:15:09.208 "state": "online", 00:15:09.208 "raid_level": "raid5f", 00:15:09.208 "superblock": true, 00:15:09.208 "num_base_bdevs": 3, 00:15:09.208 "num_base_bdevs_discovered": 3, 00:15:09.208 "num_base_bdevs_operational": 3, 00:15:09.208 
"base_bdevs_list": [ 00:15:09.208 { 00:15:09.208 "name": "NewBaseBdev", 00:15:09.208 "uuid": "21c8819c-d2b7-4b15-bec6-1f9ca317e0d6", 00:15:09.208 "is_configured": true, 00:15:09.208 "data_offset": 2048, 00:15:09.208 "data_size": 63488 00:15:09.208 }, 00:15:09.208 { 00:15:09.208 "name": "BaseBdev2", 00:15:09.208 "uuid": "398a9d05-7acf-45cc-8a1c-d1cec4489b77", 00:15:09.208 "is_configured": true, 00:15:09.208 "data_offset": 2048, 00:15:09.208 "data_size": 63488 00:15:09.209 }, 00:15:09.209 { 00:15:09.209 "name": "BaseBdev3", 00:15:09.209 "uuid": "97321e90-444b-4b5c-a782-ef8ee390475e", 00:15:09.209 "is_configured": true, 00:15:09.209 "data_offset": 2048, 00:15:09.209 "data_size": 63488 00:15:09.209 } 00:15:09.209 ] 00:15:09.209 }' 00:15:09.209 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.209 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.775 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:09.775 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:09.775 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:09.775 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:09.775 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:09.775 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:09.775 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:09.775 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.775 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:15:09.775 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.775 [2024-11-20 17:07:33.522598] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.775 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.775 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:09.775 "name": "Existed_Raid", 00:15:09.775 "aliases": [ 00:15:09.775 "8a82d671-7633-4c9e-8fa6-7e472b4d1498" 00:15:09.775 ], 00:15:09.775 "product_name": "Raid Volume", 00:15:09.775 "block_size": 512, 00:15:09.775 "num_blocks": 126976, 00:15:09.775 "uuid": "8a82d671-7633-4c9e-8fa6-7e472b4d1498", 00:15:09.775 "assigned_rate_limits": { 00:15:09.775 "rw_ios_per_sec": 0, 00:15:09.775 "rw_mbytes_per_sec": 0, 00:15:09.775 "r_mbytes_per_sec": 0, 00:15:09.775 "w_mbytes_per_sec": 0 00:15:09.775 }, 00:15:09.775 "claimed": false, 00:15:09.775 "zoned": false, 00:15:09.775 "supported_io_types": { 00:15:09.775 "read": true, 00:15:09.775 "write": true, 00:15:09.775 "unmap": false, 00:15:09.775 "flush": false, 00:15:09.775 "reset": true, 00:15:09.775 "nvme_admin": false, 00:15:09.775 "nvme_io": false, 00:15:09.775 "nvme_io_md": false, 00:15:09.775 "write_zeroes": true, 00:15:09.775 "zcopy": false, 00:15:09.775 "get_zone_info": false, 00:15:09.775 "zone_management": false, 00:15:09.775 "zone_append": false, 00:15:09.775 "compare": false, 00:15:09.775 "compare_and_write": false, 00:15:09.775 "abort": false, 00:15:09.775 "seek_hole": false, 00:15:09.775 "seek_data": false, 00:15:09.775 "copy": false, 00:15:09.775 "nvme_iov_md": false 00:15:09.775 }, 00:15:09.775 "driver_specific": { 00:15:09.775 "raid": { 00:15:09.775 "uuid": "8a82d671-7633-4c9e-8fa6-7e472b4d1498", 00:15:09.775 "strip_size_kb": 64, 00:15:09.775 "state": "online", 00:15:09.775 "raid_level": "raid5f", 00:15:09.775 "superblock": true, 00:15:09.775 
"num_base_bdevs": 3, 00:15:09.775 "num_base_bdevs_discovered": 3, 00:15:09.775 "num_base_bdevs_operational": 3, 00:15:09.775 "base_bdevs_list": [ 00:15:09.775 { 00:15:09.775 "name": "NewBaseBdev", 00:15:09.775 "uuid": "21c8819c-d2b7-4b15-bec6-1f9ca317e0d6", 00:15:09.775 "is_configured": true, 00:15:09.775 "data_offset": 2048, 00:15:09.775 "data_size": 63488 00:15:09.775 }, 00:15:09.775 { 00:15:09.775 "name": "BaseBdev2", 00:15:09.775 "uuid": "398a9d05-7acf-45cc-8a1c-d1cec4489b77", 00:15:09.775 "is_configured": true, 00:15:09.775 "data_offset": 2048, 00:15:09.775 "data_size": 63488 00:15:09.775 }, 00:15:09.775 { 00:15:09.775 "name": "BaseBdev3", 00:15:09.775 "uuid": "97321e90-444b-4b5c-a782-ef8ee390475e", 00:15:09.775 "is_configured": true, 00:15:09.776 "data_offset": 2048, 00:15:09.776 "data_size": 63488 00:15:09.776 } 00:15:09.776 ] 00:15:09.776 } 00:15:09.776 } 00:15:09.776 }' 00:15:09.776 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:09.776 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:09.776 BaseBdev2 00:15:09.776 BaseBdev3' 00:15:09.776 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.034 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:10.034 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.034 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.034 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:10.034 17:07:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.034 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.034 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.034 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.034 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.034 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.034 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:10.034 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.034 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.034 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.034 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.034 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.035 17:07:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.035 [2024-11-20 17:07:33.850497] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:10.035 [2024-11-20 17:07:33.850664] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:10.035 [2024-11-20 17:07:33.850938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.035 [2024-11-20 17:07:33.851405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:10.035 [2024-11-20 17:07:33.851582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80631 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80631 ']' 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80631 00:15:10.035 17:07:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80631 00:15:10.035 killing process with pid 80631 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80631' 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80631 00:15:10.035 [2024-11-20 17:07:33.892470] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:10.035 17:07:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80631 00:15:10.601 [2024-11-20 17:07:34.166204] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:11.560 ************************************ 00:15:11.560 END TEST raid5f_state_function_test_sb 00:15:11.560 ************************************ 00:15:11.560 17:07:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:11.560 00:15:11.560 real 0m11.916s 00:15:11.560 user 0m19.864s 00:15:11.560 sys 0m1.655s 00:15:11.560 17:07:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:11.560 17:07:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.560 17:07:35 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:11.560 17:07:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:11.560 
17:07:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:11.560 17:07:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:11.560 ************************************ 00:15:11.560 START TEST raid5f_superblock_test 00:15:11.560 ************************************ 00:15:11.560 17:07:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:11.560 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:11.560 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:11.560 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81267 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81267 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81267 ']' 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.561 17:07:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.561 [2024-11-20 17:07:35.329301] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:15:11.561 [2024-11-20 17:07:35.329739] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81267 ] 00:15:11.820 [2024-11-20 17:07:35.498467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.820 [2024-11-20 17:07:35.616593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.078 [2024-11-20 17:07:35.810217] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.079 [2024-11-20 17:07:35.810567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.645 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.645 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:12.645 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:12.645 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:12.645 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:12.645 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:12.645 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:12.645 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:12.645 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:12.645 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:12.645 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.646 malloc1 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.646 [2024-11-20 17:07:36.398094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:12.646 [2024-11-20 17:07:36.398388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.646 [2024-11-20 17:07:36.398470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:12.646 [2024-11-20 17:07:36.398720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.646 [2024-11-20 17:07:36.402089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.646 [2024-11-20 17:07:36.402151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:12.646 pt1 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.646 malloc2 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.646 [2024-11-20 17:07:36.448051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:12.646 [2024-11-20 17:07:36.448328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.646 [2024-11-20 17:07:36.448385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:12.646 [2024-11-20 17:07:36.448400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.646 [2024-11-20 17:07:36.451165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.646 [2024-11-20 17:07:36.451205] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:12.646 pt2 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.646 malloc3 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.646 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.646 [2024-11-20 17:07:36.509191] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:12.646 [2024-11-20 17:07:36.509421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.646 [2024-11-20 17:07:36.509464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:12.646 [2024-11-20 17:07:36.509480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.646 [2024-11-20 17:07:36.512595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.904 [2024-11-20 17:07:36.512808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:12.904 pt3 00:15:12.904 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.904 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:12.904 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:12.904 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:12.904 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.904 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.904 [2024-11-20 17:07:36.517222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:12.904 [2024-11-20 17:07:36.519711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:12.904 [2024-11-20 17:07:36.519826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:12.904 [2024-11-20 17:07:36.520107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:12.904 [2024-11-20 17:07:36.520133] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:12.904 [2024-11-20 17:07:36.520445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:12.904 [2024-11-20 17:07:36.525866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:12.904 [2024-11-20 17:07:36.526058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:12.904 [2024-11-20 17:07:36.526562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.904 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.904 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:12.904 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.904 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.904 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.904 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.904 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.904 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.905 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.905 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.905 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.905 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.905 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.905 
17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.905 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.905 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.905 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.905 "name": "raid_bdev1", 00:15:12.905 "uuid": "3b29c822-3833-4bf1-8200-d3e75e8445e0", 00:15:12.905 "strip_size_kb": 64, 00:15:12.905 "state": "online", 00:15:12.905 "raid_level": "raid5f", 00:15:12.905 "superblock": true, 00:15:12.905 "num_base_bdevs": 3, 00:15:12.905 "num_base_bdevs_discovered": 3, 00:15:12.905 "num_base_bdevs_operational": 3, 00:15:12.905 "base_bdevs_list": [ 00:15:12.905 { 00:15:12.905 "name": "pt1", 00:15:12.905 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:12.905 "is_configured": true, 00:15:12.905 "data_offset": 2048, 00:15:12.905 "data_size": 63488 00:15:12.905 }, 00:15:12.905 { 00:15:12.905 "name": "pt2", 00:15:12.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.905 "is_configured": true, 00:15:12.905 "data_offset": 2048, 00:15:12.905 "data_size": 63488 00:15:12.905 }, 00:15:12.905 { 00:15:12.905 "name": "pt3", 00:15:12.905 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.905 "is_configured": true, 00:15:12.905 "data_offset": 2048, 00:15:12.905 "data_size": 63488 00:15:12.905 } 00:15:12.905 ] 00:15:12.905 }' 00:15:12.905 17:07:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.905 17:07:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.471 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:13.472 17:07:37 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.472 [2024-11-20 17:07:37.073155] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:13.472 "name": "raid_bdev1", 00:15:13.472 "aliases": [ 00:15:13.472 "3b29c822-3833-4bf1-8200-d3e75e8445e0" 00:15:13.472 ], 00:15:13.472 "product_name": "Raid Volume", 00:15:13.472 "block_size": 512, 00:15:13.472 "num_blocks": 126976, 00:15:13.472 "uuid": "3b29c822-3833-4bf1-8200-d3e75e8445e0", 00:15:13.472 "assigned_rate_limits": { 00:15:13.472 "rw_ios_per_sec": 0, 00:15:13.472 "rw_mbytes_per_sec": 0, 00:15:13.472 "r_mbytes_per_sec": 0, 00:15:13.472 "w_mbytes_per_sec": 0 00:15:13.472 }, 00:15:13.472 "claimed": false, 00:15:13.472 "zoned": false, 00:15:13.472 "supported_io_types": { 00:15:13.472 "read": true, 00:15:13.472 "write": true, 00:15:13.472 "unmap": false, 00:15:13.472 "flush": false, 00:15:13.472 "reset": true, 00:15:13.472 "nvme_admin": false, 00:15:13.472 "nvme_io": false, 00:15:13.472 "nvme_io_md": false, 
00:15:13.472 "write_zeroes": true, 00:15:13.472 "zcopy": false, 00:15:13.472 "get_zone_info": false, 00:15:13.472 "zone_management": false, 00:15:13.472 "zone_append": false, 00:15:13.472 "compare": false, 00:15:13.472 "compare_and_write": false, 00:15:13.472 "abort": false, 00:15:13.472 "seek_hole": false, 00:15:13.472 "seek_data": false, 00:15:13.472 "copy": false, 00:15:13.472 "nvme_iov_md": false 00:15:13.472 }, 00:15:13.472 "driver_specific": { 00:15:13.472 "raid": { 00:15:13.472 "uuid": "3b29c822-3833-4bf1-8200-d3e75e8445e0", 00:15:13.472 "strip_size_kb": 64, 00:15:13.472 "state": "online", 00:15:13.472 "raid_level": "raid5f", 00:15:13.472 "superblock": true, 00:15:13.472 "num_base_bdevs": 3, 00:15:13.472 "num_base_bdevs_discovered": 3, 00:15:13.472 "num_base_bdevs_operational": 3, 00:15:13.472 "base_bdevs_list": [ 00:15:13.472 { 00:15:13.472 "name": "pt1", 00:15:13.472 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:13.472 "is_configured": true, 00:15:13.472 "data_offset": 2048, 00:15:13.472 "data_size": 63488 00:15:13.472 }, 00:15:13.472 { 00:15:13.472 "name": "pt2", 00:15:13.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.472 "is_configured": true, 00:15:13.472 "data_offset": 2048, 00:15:13.472 "data_size": 63488 00:15:13.472 }, 00:15:13.472 { 00:15:13.472 "name": "pt3", 00:15:13.472 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.472 "is_configured": true, 00:15:13.472 "data_offset": 2048, 00:15:13.472 "data_size": 63488 00:15:13.472 } 00:15:13.472 ] 00:15:13.472 } 00:15:13.472 } 00:15:13.472 }' 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:13.472 pt2 00:15:13.472 pt3' 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.472 
17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.472 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.731 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.731 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.731 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.731 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:13.731 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.731 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.731 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:13.731 [2024-11-20 17:07:37.389122] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.731 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.731 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3b29c822-3833-4bf1-8200-d3e75e8445e0 00:15:13.731 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3b29c822-3833-4bf1-8200-d3e75e8445e0 ']' 00:15:13.731 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:13.731 17:07:37 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.732 [2024-11-20 17:07:37.444960] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.732 [2024-11-20 17:07:37.445116] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.732 [2024-11-20 17:07:37.445307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.732 [2024-11-20 17:07:37.445532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.732 [2024-11-20 17:07:37.445558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.732 [2024-11-20 17:07:37.589064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:13.732 [2024-11-20 17:07:37.591677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:13.732 [2024-11-20 17:07:37.591752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:13.732 [2024-11-20 17:07:37.591846] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:13.732 [2024-11-20 17:07:37.591915] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:13.732 [2024-11-20 17:07:37.591948] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:13.732 [2024-11-20 17:07:37.591985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.732 [2024-11-20 17:07:37.591998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:13.732 request: 00:15:13.732 { 00:15:13.732 "name": "raid_bdev1", 00:15:13.732 "raid_level": "raid5f", 00:15:13.732 "base_bdevs": [ 00:15:13.732 "malloc1", 00:15:13.732 "malloc2", 00:15:13.732 "malloc3" 00:15:13.732 ], 00:15:13.732 "strip_size_kb": 64, 00:15:13.732 "superblock": false, 00:15:13.732 "method": "bdev_raid_create", 00:15:13.732 "req_id": 1 00:15:13.732 } 00:15:13.732 Got JSON-RPC error response 00:15:13.732 response: 00:15:13.732 { 00:15:13.732 "code": -17, 00:15:13.732 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:13.732 } 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:13.732 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.991 
17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.991 [2024-11-20 17:07:37.653011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:13.991 [2024-11-20 17:07:37.653193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.991 [2024-11-20 17:07:37.653267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:13.991 [2024-11-20 17:07:37.653369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.991 [2024-11-20 17:07:37.656262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.991 [2024-11-20 17:07:37.656411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:13.991 [2024-11-20 17:07:37.656627] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:13.991 [2024-11-20 17:07:37.656825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:13.991 pt1 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.991 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.991 "name": "raid_bdev1", 00:15:13.991 "uuid": "3b29c822-3833-4bf1-8200-d3e75e8445e0", 00:15:13.991 "strip_size_kb": 64, 00:15:13.991 "state": "configuring", 00:15:13.991 "raid_level": "raid5f", 00:15:13.991 "superblock": true, 00:15:13.991 "num_base_bdevs": 3, 00:15:13.991 "num_base_bdevs_discovered": 1, 00:15:13.992 
"num_base_bdevs_operational": 3, 00:15:13.992 "base_bdevs_list": [ 00:15:13.992 { 00:15:13.992 "name": "pt1", 00:15:13.992 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:13.992 "is_configured": true, 00:15:13.992 "data_offset": 2048, 00:15:13.992 "data_size": 63488 00:15:13.992 }, 00:15:13.992 { 00:15:13.992 "name": null, 00:15:13.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.992 "is_configured": false, 00:15:13.992 "data_offset": 2048, 00:15:13.992 "data_size": 63488 00:15:13.992 }, 00:15:13.992 { 00:15:13.992 "name": null, 00:15:13.992 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.992 "is_configured": false, 00:15:13.992 "data_offset": 2048, 00:15:13.992 "data_size": 63488 00:15:13.992 } 00:15:13.992 ] 00:15:13.992 }' 00:15:13.992 17:07:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.992 17:07:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.558 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:14.558 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.559 [2024-11-20 17:07:38.153355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:14.559 [2024-11-20 17:07:38.153586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.559 [2024-11-20 17:07:38.153631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:14.559 [2024-11-20 17:07:38.153648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.559 [2024-11-20 17:07:38.154227] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.559 [2024-11-20 17:07:38.154260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:14.559 [2024-11-20 17:07:38.154375] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:14.559 [2024-11-20 17:07:38.154413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:14.559 pt2 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.559 [2024-11-20 17:07:38.161353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.559 "name": "raid_bdev1", 00:15:14.559 "uuid": "3b29c822-3833-4bf1-8200-d3e75e8445e0", 00:15:14.559 "strip_size_kb": 64, 00:15:14.559 "state": "configuring", 00:15:14.559 "raid_level": "raid5f", 00:15:14.559 "superblock": true, 00:15:14.559 "num_base_bdevs": 3, 00:15:14.559 "num_base_bdevs_discovered": 1, 00:15:14.559 "num_base_bdevs_operational": 3, 00:15:14.559 "base_bdevs_list": [ 00:15:14.559 { 00:15:14.559 "name": "pt1", 00:15:14.559 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:14.559 "is_configured": true, 00:15:14.559 "data_offset": 2048, 00:15:14.559 "data_size": 63488 00:15:14.559 }, 00:15:14.559 { 00:15:14.559 "name": null, 00:15:14.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:14.559 "is_configured": false, 00:15:14.559 "data_offset": 0, 00:15:14.559 "data_size": 63488 00:15:14.559 }, 00:15:14.559 { 00:15:14.559 "name": null, 00:15:14.559 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:14.559 "is_configured": false, 00:15:14.559 "data_offset": 2048, 00:15:14.559 "data_size": 63488 00:15:14.559 } 00:15:14.559 ] 00:15:14.559 }' 00:15:14.559 17:07:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.559 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.817 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:14.817 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:14.817 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:14.817 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.817 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.817 [2024-11-20 17:07:38.673510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:14.817 [2024-11-20 17:07:38.673739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.817 [2024-11-20 17:07:38.673788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:14.817 [2024-11-20 17:07:38.673811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.817 [2024-11-20 17:07:38.674384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.817 [2024-11-20 17:07:38.674414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:14.817 [2024-11-20 17:07:38.674534] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:14.817 [2024-11-20 17:07:38.674569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:14.817 pt2 00:15:14.817 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.817 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:14.817 17:07:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:14.817 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:14.817 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.817 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.075 [2024-11-20 17:07:38.685527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:15.075 [2024-11-20 17:07:38.685587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.075 [2024-11-20 17:07:38.685609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:15.075 [2024-11-20 17:07:38.685625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.075 [2024-11-20 17:07:38.686110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.075 [2024-11-20 17:07:38.686152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:15.075 [2024-11-20 17:07:38.686227] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:15.075 [2024-11-20 17:07:38.686259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:15.075 [2024-11-20 17:07:38.686414] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:15.075 [2024-11-20 17:07:38.686439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:15.075 [2024-11-20 17:07:38.686791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:15.075 pt3 00:15:15.076 [2024-11-20 17:07:38.691827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:15.076 [2024-11-20 17:07:38.691852] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:15.076 [2024-11-20 17:07:38.692090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.076 "name": "raid_bdev1", 00:15:15.076 "uuid": "3b29c822-3833-4bf1-8200-d3e75e8445e0", 00:15:15.076 "strip_size_kb": 64, 00:15:15.076 "state": "online", 00:15:15.076 "raid_level": "raid5f", 00:15:15.076 "superblock": true, 00:15:15.076 "num_base_bdevs": 3, 00:15:15.076 "num_base_bdevs_discovered": 3, 00:15:15.076 "num_base_bdevs_operational": 3, 00:15:15.076 "base_bdevs_list": [ 00:15:15.076 { 00:15:15.076 "name": "pt1", 00:15:15.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:15.076 "is_configured": true, 00:15:15.076 "data_offset": 2048, 00:15:15.076 "data_size": 63488 00:15:15.076 }, 00:15:15.076 { 00:15:15.076 "name": "pt2", 00:15:15.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:15.076 "is_configured": true, 00:15:15.076 "data_offset": 2048, 00:15:15.076 "data_size": 63488 00:15:15.076 }, 00:15:15.076 { 00:15:15.076 "name": "pt3", 00:15:15.076 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:15.076 "is_configured": true, 00:15:15.076 "data_offset": 2048, 00:15:15.076 "data_size": 63488 00:15:15.076 } 00:15:15.076 ] 00:15:15.076 }' 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.076 17:07:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:15.642 17:07:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.642 [2024-11-20 17:07:39.222304] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:15.642 "name": "raid_bdev1", 00:15:15.642 "aliases": [ 00:15:15.642 "3b29c822-3833-4bf1-8200-d3e75e8445e0" 00:15:15.642 ], 00:15:15.642 "product_name": "Raid Volume", 00:15:15.642 "block_size": 512, 00:15:15.642 "num_blocks": 126976, 00:15:15.642 "uuid": "3b29c822-3833-4bf1-8200-d3e75e8445e0", 00:15:15.642 "assigned_rate_limits": { 00:15:15.642 "rw_ios_per_sec": 0, 00:15:15.642 "rw_mbytes_per_sec": 0, 00:15:15.642 "r_mbytes_per_sec": 0, 00:15:15.642 "w_mbytes_per_sec": 0 00:15:15.642 }, 00:15:15.642 "claimed": false, 00:15:15.642 "zoned": false, 00:15:15.642 "supported_io_types": { 00:15:15.642 "read": true, 00:15:15.642 "write": true, 00:15:15.642 "unmap": false, 00:15:15.642 "flush": false, 00:15:15.642 "reset": true, 00:15:15.642 "nvme_admin": false, 00:15:15.642 "nvme_io": false, 00:15:15.642 "nvme_io_md": false, 00:15:15.642 "write_zeroes": true, 00:15:15.642 "zcopy": false, 00:15:15.642 "get_zone_info": false, 
00:15:15.642 "zone_management": false, 00:15:15.642 "zone_append": false, 00:15:15.642 "compare": false, 00:15:15.642 "compare_and_write": false, 00:15:15.642 "abort": false, 00:15:15.642 "seek_hole": false, 00:15:15.642 "seek_data": false, 00:15:15.642 "copy": false, 00:15:15.642 "nvme_iov_md": false 00:15:15.642 }, 00:15:15.642 "driver_specific": { 00:15:15.642 "raid": { 00:15:15.642 "uuid": "3b29c822-3833-4bf1-8200-d3e75e8445e0", 00:15:15.642 "strip_size_kb": 64, 00:15:15.642 "state": "online", 00:15:15.642 "raid_level": "raid5f", 00:15:15.642 "superblock": true, 00:15:15.642 "num_base_bdevs": 3, 00:15:15.642 "num_base_bdevs_discovered": 3, 00:15:15.642 "num_base_bdevs_operational": 3, 00:15:15.642 "base_bdevs_list": [ 00:15:15.642 { 00:15:15.642 "name": "pt1", 00:15:15.642 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:15.642 "is_configured": true, 00:15:15.642 "data_offset": 2048, 00:15:15.642 "data_size": 63488 00:15:15.642 }, 00:15:15.642 { 00:15:15.642 "name": "pt2", 00:15:15.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:15.642 "is_configured": true, 00:15:15.642 "data_offset": 2048, 00:15:15.642 "data_size": 63488 00:15:15.642 }, 00:15:15.642 { 00:15:15.642 "name": "pt3", 00:15:15.642 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:15.642 "is_configured": true, 00:15:15.642 "data_offset": 2048, 00:15:15.642 "data_size": 63488 00:15:15.642 } 00:15:15.642 ] 00:15:15.642 } 00:15:15.642 } 00:15:15.642 }' 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:15.642 pt2 00:15:15.642 pt3' 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.642 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.643 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.901 [2024-11-20 17:07:39.542302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3b29c822-3833-4bf1-8200-d3e75e8445e0 '!=' 3b29c822-3833-4bf1-8200-d3e75e8445e0 ']' 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:15.901 17:07:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.901 [2024-11-20 17:07:39.590132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.901 17:07:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.901 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.901 "name": "raid_bdev1", 00:15:15.901 "uuid": "3b29c822-3833-4bf1-8200-d3e75e8445e0", 00:15:15.901 "strip_size_kb": 64, 00:15:15.901 "state": "online", 00:15:15.901 "raid_level": "raid5f", 00:15:15.901 "superblock": true, 00:15:15.901 "num_base_bdevs": 3, 00:15:15.901 "num_base_bdevs_discovered": 2, 00:15:15.902 "num_base_bdevs_operational": 2, 00:15:15.902 "base_bdevs_list": [ 00:15:15.902 { 00:15:15.902 "name": null, 00:15:15.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.902 "is_configured": false, 00:15:15.902 "data_offset": 0, 00:15:15.902 "data_size": 63488 00:15:15.902 }, 00:15:15.902 { 00:15:15.902 "name": "pt2", 00:15:15.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:15.902 "is_configured": true, 00:15:15.902 "data_offset": 2048, 00:15:15.902 "data_size": 63488 00:15:15.902 }, 00:15:15.902 { 00:15:15.902 "name": "pt3", 00:15:15.902 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:15.902 "is_configured": true, 00:15:15.902 "data_offset": 2048, 00:15:15.902 "data_size": 63488 00:15:15.902 } 00:15:15.902 ] 00:15:15.902 }' 00:15:15.902 17:07:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.902 17:07:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.468 [2024-11-20 17:07:40.134294] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.468 [2024-11-20 17:07:40.134448] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.468 [2024-11-20 17:07:40.134598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.468 [2024-11-20 17:07:40.134669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.468 [2024-11-20 17:07:40.134690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.468 [2024-11-20 17:07:40.218304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:16.468 [2024-11-20 17:07:40.218491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.468 [2024-11-20 17:07:40.218568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:16.468 [2024-11-20 17:07:40.218719] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:16.468 [2024-11-20 17:07:40.221762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.468 [2024-11-20 17:07:40.221829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:16.468 [2024-11-20 17:07:40.221910] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:16.468 [2024-11-20 17:07:40.221965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:16.468 pt2 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.468 "name": "raid_bdev1", 00:15:16.468 "uuid": "3b29c822-3833-4bf1-8200-d3e75e8445e0", 00:15:16.468 "strip_size_kb": 64, 00:15:16.468 "state": "configuring", 00:15:16.468 "raid_level": "raid5f", 00:15:16.468 "superblock": true, 00:15:16.468 "num_base_bdevs": 3, 00:15:16.468 "num_base_bdevs_discovered": 1, 00:15:16.468 "num_base_bdevs_operational": 2, 00:15:16.468 "base_bdevs_list": [ 00:15:16.468 { 00:15:16.468 "name": null, 00:15:16.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.468 "is_configured": false, 00:15:16.468 "data_offset": 2048, 00:15:16.468 "data_size": 63488 00:15:16.468 }, 00:15:16.468 { 00:15:16.468 "name": "pt2", 00:15:16.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:16.468 "is_configured": true, 00:15:16.468 "data_offset": 2048, 00:15:16.468 "data_size": 63488 00:15:16.468 }, 00:15:16.468 { 00:15:16.468 "name": null, 00:15:16.468 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:16.468 "is_configured": false, 00:15:16.468 "data_offset": 2048, 00:15:16.468 "data_size": 63488 00:15:16.468 } 00:15:16.468 ] 00:15:16.468 }' 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.468 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.035 [2024-11-20 17:07:40.746514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:17.035 [2024-11-20 17:07:40.746612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.035 [2024-11-20 17:07:40.746644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:17.035 [2024-11-20 17:07:40.746672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.035 [2024-11-20 17:07:40.747296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.035 [2024-11-20 17:07:40.747335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:17.035 [2024-11-20 17:07:40.747435] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:17.035 [2024-11-20 17:07:40.747474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:17.035 [2024-11-20 17:07:40.747634] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:17.035 [2024-11-20 17:07:40.747655] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:17.035 [2024-11-20 17:07:40.748006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:17.035 pt3 00:15:17.035 [2024-11-20 17:07:40.753078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:17.035 [2024-11-20 17:07:40.753103] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:17.035 [2024-11-20 17:07:40.753462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.035 17:07:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.035 "name": "raid_bdev1", 00:15:17.035 "uuid": "3b29c822-3833-4bf1-8200-d3e75e8445e0", 00:15:17.035 "strip_size_kb": 64, 00:15:17.035 "state": "online", 00:15:17.035 "raid_level": "raid5f", 00:15:17.035 "superblock": true, 00:15:17.035 "num_base_bdevs": 3, 00:15:17.035 "num_base_bdevs_discovered": 2, 00:15:17.035 "num_base_bdevs_operational": 2, 00:15:17.035 "base_bdevs_list": [ 00:15:17.035 { 00:15:17.035 "name": null, 00:15:17.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.035 "is_configured": false, 00:15:17.035 "data_offset": 2048, 00:15:17.035 "data_size": 63488 00:15:17.035 }, 00:15:17.035 { 00:15:17.035 "name": "pt2", 00:15:17.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.035 "is_configured": true, 00:15:17.035 "data_offset": 2048, 00:15:17.035 "data_size": 63488 00:15:17.035 }, 00:15:17.035 { 00:15:17.035 "name": "pt3", 00:15:17.035 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:17.035 "is_configured": true, 00:15:17.035 "data_offset": 2048, 00:15:17.035 "data_size": 63488 00:15:17.035 } 00:15:17.035 ] 00:15:17.035 }' 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.035 17:07:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.602 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:17.602 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.602 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.602 [2024-11-20 17:07:41.279244] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.602 [2024-11-20 17:07:41.279281] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.602 [2024-11-20 17:07:41.279364] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.602 [2024-11-20 17:07:41.279442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.602 [2024-11-20 17:07:41.279456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:17.602 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.602 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.602 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.602 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.602 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:17.602 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.602 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:17.602 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:17.602 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:17.602 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.603 [2024-11-20 17:07:41.355258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:17.603 [2024-11-20 17:07:41.355326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.603 [2024-11-20 17:07:41.355354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:17.603 [2024-11-20 17:07:41.355368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.603 [2024-11-20 17:07:41.358436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.603 [2024-11-20 17:07:41.358623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:17.603 [2024-11-20 17:07:41.358847] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:17.603 [2024-11-20 17:07:41.359011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:17.603 [2024-11-20 17:07:41.359306] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater pt1 00:15:17.603 than existing raid bdev raid_bdev1 (2) 00:15:17.603 [2024-11-20 17:07:41.359428] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.603 [2024-11-20 17:07:41.359465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:17.603 [2024-11-20 17:07:41.359537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:17.603 17:07:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.603 "name": "raid_bdev1", 00:15:17.603 "uuid": "3b29c822-3833-4bf1-8200-d3e75e8445e0", 00:15:17.603 "strip_size_kb": 64, 00:15:17.603 "state": "configuring", 00:15:17.603 "raid_level": "raid5f", 00:15:17.603 
"superblock": true, 00:15:17.603 "num_base_bdevs": 3, 00:15:17.603 "num_base_bdevs_discovered": 1, 00:15:17.603 "num_base_bdevs_operational": 2, 00:15:17.603 "base_bdevs_list": [ 00:15:17.603 { 00:15:17.603 "name": null, 00:15:17.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.603 "is_configured": false, 00:15:17.603 "data_offset": 2048, 00:15:17.603 "data_size": 63488 00:15:17.603 }, 00:15:17.603 { 00:15:17.603 "name": "pt2", 00:15:17.603 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.603 "is_configured": true, 00:15:17.603 "data_offset": 2048, 00:15:17.603 "data_size": 63488 00:15:17.603 }, 00:15:17.603 { 00:15:17.603 "name": null, 00:15:17.603 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:17.603 "is_configured": false, 00:15:17.603 "data_offset": 2048, 00:15:17.603 "data_size": 63488 00:15:17.603 } 00:15:17.603 ] 00:15:17.603 }' 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.603 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.170 [2024-11-20 17:07:41.955558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:18.170 [2024-11-20 17:07:41.955796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.170 [2024-11-20 17:07:41.955875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:18.170 [2024-11-20 17:07:41.956071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.170 [2024-11-20 17:07:41.956671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.170 [2024-11-20 17:07:41.956695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:18.170 [2024-11-20 17:07:41.956962] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:18.170 [2024-11-20 17:07:41.957034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:18.170 [2024-11-20 17:07:41.957334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:18.170 [2024-11-20 17:07:41.957475] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:18.170 [2024-11-20 17:07:41.957862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:18.170 [2024-11-20 17:07:41.963228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:18.170 pt3 00:15:18.170 [2024-11-20 17:07:41.963397] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:18.170 [2024-11-20 17:07:41.963770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.170 17:07:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.170 17:07:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.170 "name": "raid_bdev1", 00:15:18.170 "uuid": "3b29c822-3833-4bf1-8200-d3e75e8445e0", 00:15:18.170 "strip_size_kb": 64, 00:15:18.170 "state": "online", 00:15:18.170 "raid_level": 
"raid5f", 00:15:18.170 "superblock": true, 00:15:18.170 "num_base_bdevs": 3, 00:15:18.170 "num_base_bdevs_discovered": 2, 00:15:18.170 "num_base_bdevs_operational": 2, 00:15:18.170 "base_bdevs_list": [ 00:15:18.170 { 00:15:18.170 "name": null, 00:15:18.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.170 "is_configured": false, 00:15:18.170 "data_offset": 2048, 00:15:18.170 "data_size": 63488 00:15:18.170 }, 00:15:18.170 { 00:15:18.170 "name": "pt2", 00:15:18.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:18.170 "is_configured": true, 00:15:18.170 "data_offset": 2048, 00:15:18.170 "data_size": 63488 00:15:18.170 }, 00:15:18.170 { 00:15:18.170 "name": "pt3", 00:15:18.170 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:18.170 "is_configured": true, 00:15:18.170 "data_offset": 2048, 00:15:18.170 "data_size": 63488 00:15:18.170 } 00:15:18.170 ] 00:15:18.170 }' 00:15:18.170 17:07:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.170 17:07:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:18.736 [2024-11-20 17:07:42.537865] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3b29c822-3833-4bf1-8200-d3e75e8445e0 '!=' 3b29c822-3833-4bf1-8200-d3e75e8445e0 ']' 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81267 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81267 ']' 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81267 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.736 17:07:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81267 00:15:18.995 killing process with pid 81267 00:15:18.995 17:07:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:18.995 17:07:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:18.995 17:07:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81267' 00:15:18.995 17:07:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81267 00:15:18.995 [2024-11-20 17:07:42.615608] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:18.995 17:07:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 
81267 00:15:18.995 [2024-11-20 17:07:42.615707] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.995 [2024-11-20 17:07:42.615800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.995 [2024-11-20 17:07:42.615821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:19.252 [2024-11-20 17:07:42.865954] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:20.193 17:07:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:20.193 ************************************ 00:15:20.193 END TEST raid5f_superblock_test 00:15:20.193 ************************************ 00:15:20.193 00:15:20.193 real 0m8.674s 00:15:20.193 user 0m14.236s 00:15:20.193 sys 0m1.227s 00:15:20.193 17:07:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:20.193 17:07:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.193 17:07:43 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:20.193 17:07:43 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:20.193 17:07:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:20.193 17:07:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:20.193 17:07:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:20.193 ************************************ 00:15:20.193 START TEST raid5f_rebuild_test 00:15:20.193 ************************************ 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # 
local num_base_bdevs=3 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:20.193 17:07:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81711 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81711 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81711 ']' 00:15:20.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:20.193 17:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.451 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:15:20.451 Zero copy mechanism will not be used. 00:15:20.451 [2024-11-20 17:07:44.085305] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:15:20.451 [2024-11-20 17:07:44.085521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81711 ] 00:15:20.451 [2024-11-20 17:07:44.269224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.708 [2024-11-20 17:07:44.404266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.966 [2024-11-20 17:07:44.619516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.966 [2024-11-20 17:07:44.619596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.225 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.225 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:21.225 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:21.225 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:21.225 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.225 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.225 BaseBdev1_malloc 00:15:21.225 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.225 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:21.225 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.225 17:07:45 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.225 [2024-11-20 17:07:45.088174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:21.225 [2024-11-20 17:07:45.088382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.225 [2024-11-20 17:07:45.088428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:21.225 [2024-11-20 17:07:45.088447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.225 [2024-11-20 17:07:45.091375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.225 [2024-11-20 17:07:45.091599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:21.484 BaseBdev1 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.484 BaseBdev2_malloc 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.484 [2024-11-20 17:07:45.142868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:15:21.484 [2024-11-20 17:07:45.143144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.484 [2024-11-20 17:07:45.143219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:21.484 [2024-11-20 17:07:45.143243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.484 [2024-11-20 17:07:45.146253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.484 [2024-11-20 17:07:45.146305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:21.484 BaseBdev2 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.484 BaseBdev3_malloc 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.484 [2024-11-20 17:07:45.221889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:21.484 [2024-11-20 17:07:45.222114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.484 [2024-11-20 17:07:45.222203] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:15:21.484 [2024-11-20 17:07:45.222404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.484 [2024-11-20 17:07:45.225912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.484 [2024-11-20 17:07:45.225974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:21.484 BaseBdev3 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.484 spare_malloc 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.484 spare_delay 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.484 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.484 [2024-11-20 17:07:45.291477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:21.484 [2024-11-20 17:07:45.291742] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.485 [2024-11-20 17:07:45.291921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:21.485 [2024-11-20 17:07:45.291959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.485 [2024-11-20 17:07:45.295479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.485 [2024-11-20 17:07:45.295542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:21.485 spare 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.485 [2024-11-20 17:07:45.299846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.485 [2024-11-20 17:07:45.302747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:21.485 [2024-11-20 17:07:45.302851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:21.485 [2024-11-20 17:07:45.303000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:21.485 [2024-11-20 17:07:45.303019] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:21.485 [2024-11-20 17:07:45.303348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:21.485 [2024-11-20 17:07:45.308721] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:21.485 [2024-11-20 17:07:45.308749] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:21.485 [2024-11-20 17:07:45.309037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.485 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.743 17:07:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.743 "name": "raid_bdev1", 00:15:21.743 "uuid": "ec011c06-63b1-449b-8907-9215c98ad77c", 00:15:21.743 "strip_size_kb": 64, 00:15:21.743 "state": "online", 00:15:21.743 "raid_level": "raid5f", 00:15:21.743 "superblock": false, 00:15:21.743 "num_base_bdevs": 3, 00:15:21.743 "num_base_bdevs_discovered": 3, 00:15:21.743 "num_base_bdevs_operational": 3, 00:15:21.743 "base_bdevs_list": [ 00:15:21.743 { 00:15:21.743 "name": "BaseBdev1", 00:15:21.743 "uuid": "708c16b8-87ef-583d-af02-2e727b6815fe", 00:15:21.743 "is_configured": true, 00:15:21.743 "data_offset": 0, 00:15:21.743 "data_size": 65536 00:15:21.743 }, 00:15:21.743 { 00:15:21.743 "name": "BaseBdev2", 00:15:21.743 "uuid": "a3107960-a483-5af9-aa38-42afbb2dfca7", 00:15:21.743 "is_configured": true, 00:15:21.743 "data_offset": 0, 00:15:21.743 "data_size": 65536 00:15:21.743 }, 00:15:21.743 { 00:15:21.743 "name": "BaseBdev3", 00:15:21.743 "uuid": "08d58465-45ab-5212-a047-3a7a43f70623", 00:15:21.743 "is_configured": true, 00:15:21.743 "data_offset": 0, 00:15:21.743 "data_size": 65536 00:15:21.743 } 00:15:21.743 ] 00:15:21.743 }' 00:15:21.743 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.743 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.002 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:22.002 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:22.002 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.002 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.002 [2024-11-20 17:07:45.799336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.002 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:22.002 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:22.002 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:22.002 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.002 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.002 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.002 17:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.260 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:22.260 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:22.260 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:22.260 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:22.260 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:22.260 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.260 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:22.260 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:22.260 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:22.260 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:22.260 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:22.260 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:22.260 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:15:22.260 17:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:22.519 [2024-11-20 17:07:46.163283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:22.519 /dev/nbd0 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.519 1+0 records in 00:15:22.519 1+0 records out 00:15:22.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267944 s, 15.3 MB/s 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:22.519 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:23.085 512+0 records in 00:15:23.085 512+0 records out 00:15:23.085 67108864 bytes (67 MB, 64 MiB) copied, 0.450097 s, 149 MB/s 00:15:23.085 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:23.085 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:23.085 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:23.085 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:23.085 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:23.085 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:23.085 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:23.344 [2024-11-20 17:07:46.955891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.345 [2024-11-20 17:07:46.985838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.345 17:07:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.345 17:07:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.345 17:07:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.345 "name": "raid_bdev1", 00:15:23.345 "uuid": "ec011c06-63b1-449b-8907-9215c98ad77c", 00:15:23.345 "strip_size_kb": 64, 00:15:23.345 "state": "online", 00:15:23.345 "raid_level": "raid5f", 00:15:23.345 "superblock": false, 00:15:23.345 "num_base_bdevs": 3, 00:15:23.345 "num_base_bdevs_discovered": 2, 00:15:23.345 "num_base_bdevs_operational": 2, 00:15:23.345 "base_bdevs_list": [ 00:15:23.345 { 00:15:23.345 "name": null, 00:15:23.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.345 "is_configured": false, 00:15:23.345 "data_offset": 0, 00:15:23.345 "data_size": 65536 00:15:23.345 }, 00:15:23.345 { 00:15:23.345 "name": "BaseBdev2", 00:15:23.345 "uuid": "a3107960-a483-5af9-aa38-42afbb2dfca7", 00:15:23.345 "is_configured": true, 00:15:23.345 "data_offset": 0, 00:15:23.345 "data_size": 65536 00:15:23.345 }, 00:15:23.345 { 00:15:23.345 "name": "BaseBdev3", 00:15:23.345 "uuid": 
"08d58465-45ab-5212-a047-3a7a43f70623", 00:15:23.345 "is_configured": true, 00:15:23.345 "data_offset": 0, 00:15:23.345 "data_size": 65536 00:15:23.345 } 00:15:23.345 ] 00:15:23.345 }' 00:15:23.345 17:07:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.345 17:07:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.911 17:07:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:23.911 17:07:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.911 17:07:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.911 [2024-11-20 17:07:47.486051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:23.911 [2024-11-20 17:07:47.502511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:23.911 17:07:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.911 17:07:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:23.911 [2024-11-20 17:07:47.510086] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:24.846 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.846 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.846 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.846 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.846 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.846 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.846 17:07:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.846 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.846 17:07:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.846 17:07:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.846 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.846 "name": "raid_bdev1", 00:15:24.846 "uuid": "ec011c06-63b1-449b-8907-9215c98ad77c", 00:15:24.846 "strip_size_kb": 64, 00:15:24.846 "state": "online", 00:15:24.846 "raid_level": "raid5f", 00:15:24.846 "superblock": false, 00:15:24.846 "num_base_bdevs": 3, 00:15:24.846 "num_base_bdevs_discovered": 3, 00:15:24.846 "num_base_bdevs_operational": 3, 00:15:24.846 "process": { 00:15:24.846 "type": "rebuild", 00:15:24.846 "target": "spare", 00:15:24.846 "progress": { 00:15:24.846 "blocks": 18432, 00:15:24.846 "percent": 14 00:15:24.846 } 00:15:24.846 }, 00:15:24.846 "base_bdevs_list": [ 00:15:24.846 { 00:15:24.846 "name": "spare", 00:15:24.846 "uuid": "eb47cccc-8b10-5ff3-a698-77a35d0900f7", 00:15:24.846 "is_configured": true, 00:15:24.846 "data_offset": 0, 00:15:24.846 "data_size": 65536 00:15:24.846 }, 00:15:24.846 { 00:15:24.846 "name": "BaseBdev2", 00:15:24.846 "uuid": "a3107960-a483-5af9-aa38-42afbb2dfca7", 00:15:24.846 "is_configured": true, 00:15:24.846 "data_offset": 0, 00:15:24.846 "data_size": 65536 00:15:24.846 }, 00:15:24.846 { 00:15:24.846 "name": "BaseBdev3", 00:15:24.846 "uuid": "08d58465-45ab-5212-a047-3a7a43f70623", 00:15:24.846 "is_configured": true, 00:15:24.846 "data_offset": 0, 00:15:24.846 "data_size": 65536 00:15:24.846 } 00:15:24.846 ] 00:15:24.846 }' 00:15:24.846 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.846 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.846 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.846 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.846 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:24.846 17:07:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.846 17:07:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.846 [2024-11-20 17:07:48.680285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.103 [2024-11-20 17:07:48.724331] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:25.104 [2024-11-20 17:07:48.724449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.104 [2024-11-20 17:07:48.724504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.104 [2024-11-20 17:07:48.724525] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.104 "name": "raid_bdev1", 00:15:25.104 "uuid": "ec011c06-63b1-449b-8907-9215c98ad77c", 00:15:25.104 "strip_size_kb": 64, 00:15:25.104 "state": "online", 00:15:25.104 "raid_level": "raid5f", 00:15:25.104 "superblock": false, 00:15:25.104 "num_base_bdevs": 3, 00:15:25.104 "num_base_bdevs_discovered": 2, 00:15:25.104 "num_base_bdevs_operational": 2, 00:15:25.104 "base_bdevs_list": [ 00:15:25.104 { 00:15:25.104 "name": null, 00:15:25.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.104 "is_configured": false, 00:15:25.104 "data_offset": 0, 00:15:25.104 "data_size": 65536 00:15:25.104 }, 00:15:25.104 { 00:15:25.104 "name": "BaseBdev2", 00:15:25.104 "uuid": "a3107960-a483-5af9-aa38-42afbb2dfca7", 00:15:25.104 "is_configured": true, 00:15:25.104 "data_offset": 0, 00:15:25.104 "data_size": 65536 00:15:25.104 }, 00:15:25.104 { 00:15:25.104 "name": "BaseBdev3", 00:15:25.104 "uuid": 
"08d58465-45ab-5212-a047-3a7a43f70623", 00:15:25.104 "is_configured": true, 00:15:25.104 "data_offset": 0, 00:15:25.104 "data_size": 65536 00:15:25.104 } 00:15:25.104 ] 00:15:25.104 }' 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.104 17:07:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.670 17:07:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:25.670 17:07:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.670 17:07:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:25.670 17:07:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:25.670 17:07:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.670 17:07:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.670 17:07:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.670 17:07:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.670 17:07:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.670 17:07:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.670 17:07:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.670 "name": "raid_bdev1", 00:15:25.670 "uuid": "ec011c06-63b1-449b-8907-9215c98ad77c", 00:15:25.670 "strip_size_kb": 64, 00:15:25.670 "state": "online", 00:15:25.670 "raid_level": "raid5f", 00:15:25.670 "superblock": false, 00:15:25.670 "num_base_bdevs": 3, 00:15:25.670 "num_base_bdevs_discovered": 2, 00:15:25.670 "num_base_bdevs_operational": 2, 00:15:25.670 "base_bdevs_list": [ 00:15:25.670 { 00:15:25.670 
"name": null, 00:15:25.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.670 "is_configured": false, 00:15:25.670 "data_offset": 0, 00:15:25.670 "data_size": 65536 00:15:25.670 }, 00:15:25.670 { 00:15:25.670 "name": "BaseBdev2", 00:15:25.670 "uuid": "a3107960-a483-5af9-aa38-42afbb2dfca7", 00:15:25.671 "is_configured": true, 00:15:25.671 "data_offset": 0, 00:15:25.671 "data_size": 65536 00:15:25.671 }, 00:15:25.671 { 00:15:25.671 "name": "BaseBdev3", 00:15:25.671 "uuid": "08d58465-45ab-5212-a047-3a7a43f70623", 00:15:25.671 "is_configured": true, 00:15:25.671 "data_offset": 0, 00:15:25.671 "data_size": 65536 00:15:25.671 } 00:15:25.671 ] 00:15:25.671 }' 00:15:25.671 17:07:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.671 17:07:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:25.671 17:07:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.671 17:07:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:25.671 17:07:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:25.671 17:07:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.671 17:07:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.671 [2024-11-20 17:07:49.460496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:25.671 [2024-11-20 17:07:49.475988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:25.671 17:07:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.671 17:07:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:25.671 [2024-11-20 17:07:49.483897] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:15:27.045 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.045 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.045 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.045 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.045 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.045 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.045 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.045 17:07:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.045 17:07:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.045 17:07:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.045 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.046 "name": "raid_bdev1", 00:15:27.046 "uuid": "ec011c06-63b1-449b-8907-9215c98ad77c", 00:15:27.046 "strip_size_kb": 64, 00:15:27.046 "state": "online", 00:15:27.046 "raid_level": "raid5f", 00:15:27.046 "superblock": false, 00:15:27.046 "num_base_bdevs": 3, 00:15:27.046 "num_base_bdevs_discovered": 3, 00:15:27.046 "num_base_bdevs_operational": 3, 00:15:27.046 "process": { 00:15:27.046 "type": "rebuild", 00:15:27.046 "target": "spare", 00:15:27.046 "progress": { 00:15:27.046 "blocks": 18432, 00:15:27.046 "percent": 14 00:15:27.046 } 00:15:27.046 }, 00:15:27.046 "base_bdevs_list": [ 00:15:27.046 { 00:15:27.046 "name": "spare", 00:15:27.046 "uuid": "eb47cccc-8b10-5ff3-a698-77a35d0900f7", 00:15:27.046 "is_configured": true, 00:15:27.046 "data_offset": 0, 
00:15:27.046 "data_size": 65536 00:15:27.046 }, 00:15:27.046 { 00:15:27.046 "name": "BaseBdev2", 00:15:27.046 "uuid": "a3107960-a483-5af9-aa38-42afbb2dfca7", 00:15:27.046 "is_configured": true, 00:15:27.046 "data_offset": 0, 00:15:27.046 "data_size": 65536 00:15:27.046 }, 00:15:27.046 { 00:15:27.046 "name": "BaseBdev3", 00:15:27.046 "uuid": "08d58465-45ab-5212-a047-3a7a43f70623", 00:15:27.046 "is_configured": true, 00:15:27.046 "data_offset": 0, 00:15:27.046 "data_size": 65536 00:15:27.046 } 00:15:27.046 ] 00:15:27.046 }' 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=586 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.046 17:07:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.046 "name": "raid_bdev1", 00:15:27.046 "uuid": "ec011c06-63b1-449b-8907-9215c98ad77c", 00:15:27.046 "strip_size_kb": 64, 00:15:27.046 "state": "online", 00:15:27.046 "raid_level": "raid5f", 00:15:27.046 "superblock": false, 00:15:27.046 "num_base_bdevs": 3, 00:15:27.046 "num_base_bdevs_discovered": 3, 00:15:27.046 "num_base_bdevs_operational": 3, 00:15:27.046 "process": { 00:15:27.046 "type": "rebuild", 00:15:27.046 "target": "spare", 00:15:27.046 "progress": { 00:15:27.046 "blocks": 22528, 00:15:27.046 "percent": 17 00:15:27.046 } 00:15:27.046 }, 00:15:27.046 "base_bdevs_list": [ 00:15:27.046 { 00:15:27.046 "name": "spare", 00:15:27.046 "uuid": "eb47cccc-8b10-5ff3-a698-77a35d0900f7", 00:15:27.046 "is_configured": true, 00:15:27.046 "data_offset": 0, 00:15:27.046 "data_size": 65536 00:15:27.046 }, 00:15:27.046 { 00:15:27.046 "name": "BaseBdev2", 00:15:27.046 "uuid": "a3107960-a483-5af9-aa38-42afbb2dfca7", 00:15:27.046 "is_configured": true, 00:15:27.046 "data_offset": 0, 00:15:27.046 "data_size": 65536 00:15:27.046 }, 00:15:27.046 { 00:15:27.046 "name": "BaseBdev3", 00:15:27.046 "uuid": "08d58465-45ab-5212-a047-3a7a43f70623", 00:15:27.046 "is_configured": true, 00:15:27.046 "data_offset": 0, 00:15:27.046 "data_size": 65536 00:15:27.046 } 
00:15:27.046 ] 00:15:27.046 }' 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.046 17:07:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:27.981 17:07:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.981 17:07:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.981 17:07:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.981 17:07:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.981 17:07:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.981 17:07:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.981 17:07:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.981 17:07:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.981 17:07:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.981 17:07:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.981 17:07:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.240 17:07:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.240 "name": "raid_bdev1", 00:15:28.240 "uuid": "ec011c06-63b1-449b-8907-9215c98ad77c", 00:15:28.240 
"strip_size_kb": 64, 00:15:28.240 "state": "online", 00:15:28.240 "raid_level": "raid5f", 00:15:28.240 "superblock": false, 00:15:28.240 "num_base_bdevs": 3, 00:15:28.240 "num_base_bdevs_discovered": 3, 00:15:28.240 "num_base_bdevs_operational": 3, 00:15:28.240 "process": { 00:15:28.240 "type": "rebuild", 00:15:28.240 "target": "spare", 00:15:28.240 "progress": { 00:15:28.240 "blocks": 47104, 00:15:28.240 "percent": 35 00:15:28.240 } 00:15:28.240 }, 00:15:28.240 "base_bdevs_list": [ 00:15:28.240 { 00:15:28.240 "name": "spare", 00:15:28.240 "uuid": "eb47cccc-8b10-5ff3-a698-77a35d0900f7", 00:15:28.240 "is_configured": true, 00:15:28.240 "data_offset": 0, 00:15:28.240 "data_size": 65536 00:15:28.240 }, 00:15:28.240 { 00:15:28.240 "name": "BaseBdev2", 00:15:28.240 "uuid": "a3107960-a483-5af9-aa38-42afbb2dfca7", 00:15:28.240 "is_configured": true, 00:15:28.240 "data_offset": 0, 00:15:28.240 "data_size": 65536 00:15:28.240 }, 00:15:28.240 { 00:15:28.240 "name": "BaseBdev3", 00:15:28.240 "uuid": "08d58465-45ab-5212-a047-3a7a43f70623", 00:15:28.240 "is_configured": true, 00:15:28.240 "data_offset": 0, 00:15:28.240 "data_size": 65536 00:15:28.240 } 00:15:28.240 ] 00:15:28.240 }' 00:15:28.240 17:07:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.240 17:07:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.240 17:07:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.240 17:07:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.240 17:07:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:29.175 17:07:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.175 17:07:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.175 17:07:52 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.175 17:07:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.175 17:07:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.175 17:07:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.176 17:07:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.176 17:07:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.176 17:07:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.176 17:07:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.176 17:07:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.176 17:07:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.176 "name": "raid_bdev1", 00:15:29.176 "uuid": "ec011c06-63b1-449b-8907-9215c98ad77c", 00:15:29.176 "strip_size_kb": 64, 00:15:29.176 "state": "online", 00:15:29.176 "raid_level": "raid5f", 00:15:29.176 "superblock": false, 00:15:29.176 "num_base_bdevs": 3, 00:15:29.176 "num_base_bdevs_discovered": 3, 00:15:29.176 "num_base_bdevs_operational": 3, 00:15:29.176 "process": { 00:15:29.176 "type": "rebuild", 00:15:29.176 "target": "spare", 00:15:29.176 "progress": { 00:15:29.176 "blocks": 69632, 00:15:29.176 "percent": 53 00:15:29.176 } 00:15:29.176 }, 00:15:29.176 "base_bdevs_list": [ 00:15:29.176 { 00:15:29.176 "name": "spare", 00:15:29.176 "uuid": "eb47cccc-8b10-5ff3-a698-77a35d0900f7", 00:15:29.176 "is_configured": true, 00:15:29.176 "data_offset": 0, 00:15:29.176 "data_size": 65536 00:15:29.176 }, 00:15:29.176 { 00:15:29.176 "name": "BaseBdev2", 00:15:29.176 "uuid": "a3107960-a483-5af9-aa38-42afbb2dfca7", 00:15:29.176 
"is_configured": true, 00:15:29.176 "data_offset": 0, 00:15:29.176 "data_size": 65536 00:15:29.176 }, 00:15:29.176 { 00:15:29.176 "name": "BaseBdev3", 00:15:29.176 "uuid": "08d58465-45ab-5212-a047-3a7a43f70623", 00:15:29.176 "is_configured": true, 00:15:29.176 "data_offset": 0, 00:15:29.176 "data_size": 65536 00:15:29.176 } 00:15:29.176 ] 00:15:29.176 }' 00:15:29.176 17:07:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.435 17:07:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.435 17:07:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.435 17:07:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.435 17:07:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.371 17:07:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.371 17:07:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.371 17:07:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.371 17:07:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.371 17:07:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.371 17:07:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.371 17:07:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.371 17:07:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.371 17:07:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.371 17:07:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:30.371 17:07:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.371 17:07:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.371 "name": "raid_bdev1", 00:15:30.371 "uuid": "ec011c06-63b1-449b-8907-9215c98ad77c", 00:15:30.371 "strip_size_kb": 64, 00:15:30.371 "state": "online", 00:15:30.371 "raid_level": "raid5f", 00:15:30.371 "superblock": false, 00:15:30.371 "num_base_bdevs": 3, 00:15:30.371 "num_base_bdevs_discovered": 3, 00:15:30.371 "num_base_bdevs_operational": 3, 00:15:30.371 "process": { 00:15:30.371 "type": "rebuild", 00:15:30.371 "target": "spare", 00:15:30.371 "progress": { 00:15:30.371 "blocks": 94208, 00:15:30.371 "percent": 71 00:15:30.371 } 00:15:30.371 }, 00:15:30.371 "base_bdevs_list": [ 00:15:30.371 { 00:15:30.371 "name": "spare", 00:15:30.371 "uuid": "eb47cccc-8b10-5ff3-a698-77a35d0900f7", 00:15:30.371 "is_configured": true, 00:15:30.371 "data_offset": 0, 00:15:30.371 "data_size": 65536 00:15:30.371 }, 00:15:30.371 { 00:15:30.371 "name": "BaseBdev2", 00:15:30.371 "uuid": "a3107960-a483-5af9-aa38-42afbb2dfca7", 00:15:30.371 "is_configured": true, 00:15:30.371 "data_offset": 0, 00:15:30.371 "data_size": 65536 00:15:30.371 }, 00:15:30.371 { 00:15:30.371 "name": "BaseBdev3", 00:15:30.371 "uuid": "08d58465-45ab-5212-a047-3a7a43f70623", 00:15:30.371 "is_configured": true, 00:15:30.371 "data_offset": 0, 00:15:30.371 "data_size": 65536 00:15:30.371 } 00:15:30.371 ] 00:15:30.371 }' 00:15:30.371 17:07:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.629 17:07:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.629 17:07:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.629 17:07:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.629 17:07:54 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:31.564 17:07:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.564 17:07:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.564 17:07:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.564 17:07:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.564 17:07:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.564 17:07:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.564 17:07:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.564 17:07:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.564 17:07:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.564 17:07:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.564 17:07:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.564 17:07:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.564 "name": "raid_bdev1", 00:15:31.564 "uuid": "ec011c06-63b1-449b-8907-9215c98ad77c", 00:15:31.564 "strip_size_kb": 64, 00:15:31.564 "state": "online", 00:15:31.564 "raid_level": "raid5f", 00:15:31.564 "superblock": false, 00:15:31.564 "num_base_bdevs": 3, 00:15:31.564 "num_base_bdevs_discovered": 3, 00:15:31.564 "num_base_bdevs_operational": 3, 00:15:31.564 "process": { 00:15:31.564 "type": "rebuild", 00:15:31.564 "target": "spare", 00:15:31.564 "progress": { 00:15:31.564 "blocks": 116736, 00:15:31.564 "percent": 89 00:15:31.564 } 00:15:31.564 }, 00:15:31.564 "base_bdevs_list": [ 00:15:31.564 { 
00:15:31.564 "name": "spare", 00:15:31.564 "uuid": "eb47cccc-8b10-5ff3-a698-77a35d0900f7", 00:15:31.564 "is_configured": true, 00:15:31.564 "data_offset": 0, 00:15:31.564 "data_size": 65536 00:15:31.564 }, 00:15:31.564 { 00:15:31.564 "name": "BaseBdev2", 00:15:31.564 "uuid": "a3107960-a483-5af9-aa38-42afbb2dfca7", 00:15:31.564 "is_configured": true, 00:15:31.564 "data_offset": 0, 00:15:31.564 "data_size": 65536 00:15:31.564 }, 00:15:31.564 { 00:15:31.564 "name": "BaseBdev3", 00:15:31.564 "uuid": "08d58465-45ab-5212-a047-3a7a43f70623", 00:15:31.564 "is_configured": true, 00:15:31.564 "data_offset": 0, 00:15:31.564 "data_size": 65536 00:15:31.564 } 00:15:31.564 ] 00:15:31.564 }' 00:15:31.564 17:07:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.564 17:07:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.564 17:07:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.822 17:07:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.822 17:07:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:32.387 [2024-11-20 17:07:55.963444] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:32.388 [2024-11-20 17:07:55.963561] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:32.388 [2024-11-20 17:07:55.963640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.646 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.646 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.646 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.646 17:07:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.646 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.646 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.646 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.646 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.646 17:07:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.646 17:07:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.646 17:07:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.905 "name": "raid_bdev1", 00:15:32.905 "uuid": "ec011c06-63b1-449b-8907-9215c98ad77c", 00:15:32.905 "strip_size_kb": 64, 00:15:32.905 "state": "online", 00:15:32.905 "raid_level": "raid5f", 00:15:32.905 "superblock": false, 00:15:32.905 "num_base_bdevs": 3, 00:15:32.905 "num_base_bdevs_discovered": 3, 00:15:32.905 "num_base_bdevs_operational": 3, 00:15:32.905 "base_bdevs_list": [ 00:15:32.905 { 00:15:32.905 "name": "spare", 00:15:32.905 "uuid": "eb47cccc-8b10-5ff3-a698-77a35d0900f7", 00:15:32.905 "is_configured": true, 00:15:32.905 "data_offset": 0, 00:15:32.905 "data_size": 65536 00:15:32.905 }, 00:15:32.905 { 00:15:32.905 "name": "BaseBdev2", 00:15:32.905 "uuid": "a3107960-a483-5af9-aa38-42afbb2dfca7", 00:15:32.905 "is_configured": true, 00:15:32.905 "data_offset": 0, 00:15:32.905 "data_size": 65536 00:15:32.905 }, 00:15:32.905 { 00:15:32.905 "name": "BaseBdev3", 00:15:32.905 "uuid": "08d58465-45ab-5212-a047-3a7a43f70623", 00:15:32.905 "is_configured": true, 00:15:32.905 "data_offset": 0, 00:15:32.905 "data_size": 65536 00:15:32.905 } 
00:15:32.905 ] 00:15:32.905 }' 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.905 "name": "raid_bdev1", 00:15:32.905 "uuid": "ec011c06-63b1-449b-8907-9215c98ad77c", 00:15:32.905 "strip_size_kb": 64, 00:15:32.905 "state": "online", 00:15:32.905 "raid_level": "raid5f", 00:15:32.905 "superblock": false, 
00:15:32.905 "num_base_bdevs": 3, 00:15:32.905 "num_base_bdevs_discovered": 3, 00:15:32.905 "num_base_bdevs_operational": 3, 00:15:32.905 "base_bdevs_list": [ 00:15:32.905 { 00:15:32.905 "name": "spare", 00:15:32.905 "uuid": "eb47cccc-8b10-5ff3-a698-77a35d0900f7", 00:15:32.905 "is_configured": true, 00:15:32.905 "data_offset": 0, 00:15:32.905 "data_size": 65536 00:15:32.905 }, 00:15:32.905 { 00:15:32.905 "name": "BaseBdev2", 00:15:32.905 "uuid": "a3107960-a483-5af9-aa38-42afbb2dfca7", 00:15:32.905 "is_configured": true, 00:15:32.905 "data_offset": 0, 00:15:32.905 "data_size": 65536 00:15:32.905 }, 00:15:32.905 { 00:15:32.905 "name": "BaseBdev3", 00:15:32.905 "uuid": "08d58465-45ab-5212-a047-3a7a43f70623", 00:15:32.905 "is_configured": true, 00:15:32.905 "data_offset": 0, 00:15:32.905 "data_size": 65536 00:15:32.905 } 00:15:32.905 ] 00:15:32.905 }' 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.905 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:33.190 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.191 
17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.191 "name": "raid_bdev1", 00:15:33.191 "uuid": "ec011c06-63b1-449b-8907-9215c98ad77c", 00:15:33.191 "strip_size_kb": 64, 00:15:33.191 "state": "online", 00:15:33.191 "raid_level": "raid5f", 00:15:33.191 "superblock": false, 00:15:33.191 "num_base_bdevs": 3, 00:15:33.191 "num_base_bdevs_discovered": 3, 00:15:33.191 "num_base_bdevs_operational": 3, 00:15:33.191 "base_bdevs_list": [ 00:15:33.191 { 00:15:33.191 "name": "spare", 00:15:33.191 "uuid": "eb47cccc-8b10-5ff3-a698-77a35d0900f7", 00:15:33.191 "is_configured": true, 00:15:33.191 "data_offset": 0, 00:15:33.191 "data_size": 65536 00:15:33.191 }, 00:15:33.191 { 00:15:33.191 "name": "BaseBdev2", 00:15:33.191 "uuid": "a3107960-a483-5af9-aa38-42afbb2dfca7", 00:15:33.191 "is_configured": true, 00:15:33.191 "data_offset": 0, 00:15:33.191 "data_size": 65536 00:15:33.191 }, 00:15:33.191 { 00:15:33.191 "name": "BaseBdev3", 00:15:33.191 "uuid": "08d58465-45ab-5212-a047-3a7a43f70623", 
00:15:33.191 "is_configured": true, 00:15:33.191 "data_offset": 0, 00:15:33.191 "data_size": 65536 00:15:33.191 } 00:15:33.191 ] 00:15:33.191 }' 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.191 17:07:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.758 [2024-11-20 17:07:57.394446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:33.758 [2024-11-20 17:07:57.394679] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.758 [2024-11-20 17:07:57.394929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.758 [2024-11-20 17:07:57.395172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.758 [2024-11-20 17:07:57.395395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:33.758 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:34.016 /dev/nbd0 00:15:34.016 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:34.016 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:34.016 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:34.016 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:34.016 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.016 17:07:57 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.016 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:34.017 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:34.017 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.017 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.017 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.017 1+0 records in 00:15:34.017 1+0 records out 00:15:34.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545674 s, 7.5 MB/s 00:15:34.017 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.017 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:34.017 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.017 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.017 17:07:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:34.017 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.017 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:34.017 17:07:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:34.284 /dev/nbd1 00:15:34.284 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:34.284 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:34.284 17:07:58 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:34.284 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:34.284 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.284 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.284 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:34.284 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:34.284 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.284 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.284 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.284 1+0 records in 00:15:34.284 1+0 records out 00:15:34.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300435 s, 13.6 MB/s 00:15:34.284 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.284 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:34.284 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.284 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.284 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:34.284 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.285 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:34.285 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:34.555 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:34.555 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.555 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:34.555 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:34.555 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:34.555 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.555 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:34.814 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.814 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.814 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.814 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.814 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.814 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.814 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:34.814 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.814 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.814 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:35.072 17:07:58 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:35.072 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:35.072 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:35.073 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.073 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.073 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:35.073 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:35.073 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.073 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:35.073 17:07:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81711 00:15:35.073 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81711 ']' 00:15:35.073 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81711 00:15:35.073 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:35.073 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:35.073 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81711 00:15:35.073 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:35.073 killing process with pid 81711 00:15:35.073 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:35.073 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81711' 00:15:35.073 Received shutdown signal, test time was about 60.000000 seconds 00:15:35.073 00:15:35.073 Latency(us) 
00:15:35.073 [2024-11-20T17:07:58.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.073 [2024-11-20T17:07:58.942Z] =================================================================================================================== 00:15:35.073 [2024-11-20T17:07:58.942Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:35.073 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81711 00:15:35.073 [2024-11-20 17:07:58.893948] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:35.073 17:07:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81711 00:15:35.640 [2024-11-20 17:07:59.223818] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:36.576 17:08:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:36.576 00:15:36.576 real 0m16.269s 00:15:36.576 user 0m20.760s 00:15:36.576 sys 0m2.020s 00:15:36.576 17:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.576 17:08:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.576 ************************************ 00:15:36.576 END TEST raid5f_rebuild_test 00:15:36.576 ************************************ 00:15:36.576 17:08:00 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:36.576 17:08:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:36.576 17:08:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.576 17:08:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:36.576 ************************************ 00:15:36.576 START TEST raid5f_rebuild_test_sb 00:15:36.576 ************************************ 00:15:36.576 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:36.576 
17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:36.576 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:36.576 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:36.576 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:36.576 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:36.576 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82163 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82163 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82163 ']' 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.577 17:08:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.577 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:36.577 Zero copy mechanism will not be used. 00:15:36.577 [2024-11-20 17:08:00.394986] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:15:36.577 [2024-11-20 17:08:00.395156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82163 ] 00:15:36.836 [2024-11-20 17:08:00.569124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.836 [2024-11-20 17:08:00.701442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.095 [2024-11-20 17:08:00.899057] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.095 [2024-11-20 17:08:00.899095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.664 17:08:01 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.664 BaseBdev1_malloc 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.664 [2024-11-20 17:08:01.427213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:37.664 [2024-11-20 17:08:01.427292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.664 [2024-11-20 17:08:01.427320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:37.664 [2024-11-20 17:08:01.427338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.664 [2024-11-20 17:08:01.430177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.664 [2024-11-20 17:08:01.430249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:37.664 BaseBdev1 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.664 BaseBdev2_malloc 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.664 [2024-11-20 17:08:01.480715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:37.664 [2024-11-20 17:08:01.480820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.664 [2024-11-20 17:08:01.480851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:37.664 [2024-11-20 17:08:01.480868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.664 [2024-11-20 17:08:01.483598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.664 [2024-11-20 17:08:01.483640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:37.664 BaseBdev2 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.664 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.925 BaseBdev3_malloc 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.925 [2024-11-20 17:08:01.549364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:37.925 [2024-11-20 17:08:01.549441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.925 [2024-11-20 17:08:01.549469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:37.925 [2024-11-20 17:08:01.549486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.925 [2024-11-20 17:08:01.552287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.925 [2024-11-20 17:08:01.552507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:37.925 BaseBdev3 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.925 spare_malloc 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.925 spare_delay 00:15:37.925 
17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.925 [2024-11-20 17:08:01.610090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:37.925 [2024-11-20 17:08:01.610174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.925 [2024-11-20 17:08:01.610213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:37.925 [2024-11-20 17:08:01.610229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.925 [2024-11-20 17:08:01.612993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.925 [2024-11-20 17:08:01.613044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:37.925 spare 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.925 [2024-11-20 17:08:01.618196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.925 [2024-11-20 17:08:01.620710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.925 [2024-11-20 17:08:01.620822] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:37.925 [2024-11-20 17:08:01.621216] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:37.925 [2024-11-20 17:08:01.621347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:37.925 [2024-11-20 17:08:01.621699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:37.925 [2024-11-20 17:08:01.627155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:37.925 [2024-11-20 17:08:01.627296] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:37.925 [2024-11-20 17:08:01.627740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.925 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.925 "name": "raid_bdev1", 00:15:37.925 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:37.925 "strip_size_kb": 64, 00:15:37.925 "state": "online", 00:15:37.925 "raid_level": "raid5f", 00:15:37.925 "superblock": true, 00:15:37.925 "num_base_bdevs": 3, 00:15:37.925 "num_base_bdevs_discovered": 3, 00:15:37.925 "num_base_bdevs_operational": 3, 00:15:37.925 "base_bdevs_list": [ 00:15:37.925 { 00:15:37.925 "name": "BaseBdev1", 00:15:37.925 "uuid": "db3461f7-d40a-5fd7-845f-739a03b899a3", 00:15:37.925 "is_configured": true, 00:15:37.925 "data_offset": 2048, 00:15:37.925 "data_size": 63488 00:15:37.925 }, 00:15:37.925 { 00:15:37.925 "name": "BaseBdev2", 00:15:37.925 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:37.926 "is_configured": true, 00:15:37.926 "data_offset": 2048, 00:15:37.926 "data_size": 63488 00:15:37.926 }, 00:15:37.926 { 00:15:37.926 "name": "BaseBdev3", 00:15:37.926 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:37.926 "is_configured": true, 00:15:37.926 "data_offset": 2048, 00:15:37.926 "data_size": 63488 00:15:37.926 } 00:15:37.926 ] 00:15:37.926 }' 00:15:37.926 17:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.926 17:08:01 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.493 [2024-11-20 17:08:02.161877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:38.493 17:08:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:38.493 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:38.751 [2024-11-20 17:08:02.569888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:38.751 /dev/nbd0 00:15:38.751 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:38.751 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:38.751 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:38.751 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:38.751 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:38.751 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:38.751 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:39.009 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:15:39.009 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:39.009 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:39.009 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.009 1+0 records in 00:15:39.009 1+0 records out 00:15:39.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478685 s, 8.6 MB/s 00:15:39.009 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.009 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:39.009 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.009 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:39.009 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:39.009 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:39.009 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:39.009 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:39.009 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:39.009 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:39.009 17:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:39.267 496+0 records in 00:15:39.267 496+0 records out 00:15:39.267 65011712 bytes (65 MB, 62 MiB) copied, 0.460406 s, 141 MB/s 00:15:39.267 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:39.267 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:39.267 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:39.267 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:39.267 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:39.267 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.267 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:39.832 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:39.832 [2024-11-20 17:08:03.420599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.832 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:39.832 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:39.832 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.832 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.832 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:39.833 [2024-11-20 17:08:03.434615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.833 "name": "raid_bdev1", 00:15:39.833 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:39.833 "strip_size_kb": 64, 00:15:39.833 "state": "online", 00:15:39.833 "raid_level": "raid5f", 00:15:39.833 "superblock": true, 00:15:39.833 "num_base_bdevs": 3, 00:15:39.833 "num_base_bdevs_discovered": 2, 00:15:39.833 "num_base_bdevs_operational": 2, 00:15:39.833 "base_bdevs_list": [ 00:15:39.833 { 00:15:39.833 "name": null, 00:15:39.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.833 "is_configured": false, 00:15:39.833 "data_offset": 0, 00:15:39.833 "data_size": 63488 00:15:39.833 }, 00:15:39.833 { 00:15:39.833 "name": "BaseBdev2", 00:15:39.833 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:39.833 "is_configured": true, 00:15:39.833 "data_offset": 2048, 00:15:39.833 "data_size": 63488 00:15:39.833 }, 00:15:39.833 { 00:15:39.833 "name": "BaseBdev3", 00:15:39.833 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:39.833 "is_configured": true, 00:15:39.833 "data_offset": 2048, 00:15:39.833 "data_size": 63488 00:15:39.833 } 00:15:39.833 ] 00:15:39.833 }' 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.833 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.398 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:40.398 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.399 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.399 [2024-11-20 17:08:03.978800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.399 [2024-11-20 17:08:03.995422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:40.399 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.399 17:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:40.399 [2024-11-20 17:08:04.003455] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:41.368 17:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.368 17:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.368 17:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.368 17:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.368 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.368 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.368 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.368 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.368 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.368 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.368 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.368 "name": "raid_bdev1", 00:15:41.368 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:41.368 "strip_size_kb": 64, 00:15:41.368 "state": "online", 00:15:41.368 "raid_level": "raid5f", 00:15:41.368 "superblock": true, 00:15:41.368 "num_base_bdevs": 3, 00:15:41.368 "num_base_bdevs_discovered": 3, 00:15:41.368 "num_base_bdevs_operational": 3, 00:15:41.368 "process": { 00:15:41.368 "type": "rebuild", 00:15:41.368 "target": "spare", 00:15:41.368 "progress": { 
00:15:41.368 "blocks": 18432, 00:15:41.368 "percent": 14 00:15:41.368 } 00:15:41.368 }, 00:15:41.368 "base_bdevs_list": [ 00:15:41.368 { 00:15:41.368 "name": "spare", 00:15:41.368 "uuid": "fe197531-1010-5903-90df-b3d8bc540d1e", 00:15:41.368 "is_configured": true, 00:15:41.368 "data_offset": 2048, 00:15:41.368 "data_size": 63488 00:15:41.368 }, 00:15:41.368 { 00:15:41.368 "name": "BaseBdev2", 00:15:41.368 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:41.368 "is_configured": true, 00:15:41.368 "data_offset": 2048, 00:15:41.368 "data_size": 63488 00:15:41.368 }, 00:15:41.368 { 00:15:41.368 "name": "BaseBdev3", 00:15:41.368 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:41.368 "is_configured": true, 00:15:41.368 "data_offset": 2048, 00:15:41.368 "data_size": 63488 00:15:41.368 } 00:15:41.368 ] 00:15:41.368 }' 00:15:41.368 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.368 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.368 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.368 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.368 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:41.368 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.368 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.368 [2024-11-20 17:08:05.165434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.368 [2024-11-20 17:08:05.218720] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:41.368 [2024-11-20 17:08:05.218809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:41.368 [2024-11-20 17:08:05.218848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.368 [2024-11-20 17:08:05.218860] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.627 17:08:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.627 "name": "raid_bdev1", 00:15:41.627 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:41.627 "strip_size_kb": 64, 00:15:41.627 "state": "online", 00:15:41.627 "raid_level": "raid5f", 00:15:41.627 "superblock": true, 00:15:41.627 "num_base_bdevs": 3, 00:15:41.627 "num_base_bdevs_discovered": 2, 00:15:41.627 "num_base_bdevs_operational": 2, 00:15:41.627 "base_bdevs_list": [ 00:15:41.627 { 00:15:41.627 "name": null, 00:15:41.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.627 "is_configured": false, 00:15:41.627 "data_offset": 0, 00:15:41.627 "data_size": 63488 00:15:41.627 }, 00:15:41.627 { 00:15:41.627 "name": "BaseBdev2", 00:15:41.627 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:41.627 "is_configured": true, 00:15:41.627 "data_offset": 2048, 00:15:41.627 "data_size": 63488 00:15:41.627 }, 00:15:41.627 { 00:15:41.627 "name": "BaseBdev3", 00:15:41.627 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:41.627 "is_configured": true, 00:15:41.627 "data_offset": 2048, 00:15:41.627 "data_size": 63488 00:15:41.627 } 00:15:41.627 ] 00:15:41.627 }' 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.627 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.195 "name": "raid_bdev1", 00:15:42.195 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:42.195 "strip_size_kb": 64, 00:15:42.195 "state": "online", 00:15:42.195 "raid_level": "raid5f", 00:15:42.195 "superblock": true, 00:15:42.195 "num_base_bdevs": 3, 00:15:42.195 "num_base_bdevs_discovered": 2, 00:15:42.195 "num_base_bdevs_operational": 2, 00:15:42.195 "base_bdevs_list": [ 00:15:42.195 { 00:15:42.195 "name": null, 00:15:42.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.195 "is_configured": false, 00:15:42.195 "data_offset": 0, 00:15:42.195 "data_size": 63488 00:15:42.195 }, 00:15:42.195 { 00:15:42.195 "name": "BaseBdev2", 00:15:42.195 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:42.195 "is_configured": true, 00:15:42.195 "data_offset": 2048, 00:15:42.195 "data_size": 63488 00:15:42.195 }, 00:15:42.195 { 00:15:42.195 "name": "BaseBdev3", 00:15:42.195 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:42.195 "is_configured": true, 00:15:42.195 "data_offset": 2048, 00:15:42.195 "data_size": 63488 00:15:42.195 } 00:15:42.195 ] 00:15:42.195 }' 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.195 [2024-11-20 17:08:05.938977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.195 [2024-11-20 17:08:05.954574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.195 17:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:42.195 [2024-11-20 17:08:05.962194] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:43.131 17:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.131 17:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.131 17:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.131 17:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.131 17:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.131 17:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.131 17:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:43.131 17:08:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.131 17:08:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.131 17:08:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.390 "name": "raid_bdev1", 00:15:43.390 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:43.390 "strip_size_kb": 64, 00:15:43.390 "state": "online", 00:15:43.390 "raid_level": "raid5f", 00:15:43.390 "superblock": true, 00:15:43.390 "num_base_bdevs": 3, 00:15:43.390 "num_base_bdevs_discovered": 3, 00:15:43.390 "num_base_bdevs_operational": 3, 00:15:43.390 "process": { 00:15:43.390 "type": "rebuild", 00:15:43.390 "target": "spare", 00:15:43.390 "progress": { 00:15:43.390 "blocks": 18432, 00:15:43.390 "percent": 14 00:15:43.390 } 00:15:43.390 }, 00:15:43.390 "base_bdevs_list": [ 00:15:43.390 { 00:15:43.390 "name": "spare", 00:15:43.390 "uuid": "fe197531-1010-5903-90df-b3d8bc540d1e", 00:15:43.390 "is_configured": true, 00:15:43.390 "data_offset": 2048, 00:15:43.390 "data_size": 63488 00:15:43.390 }, 00:15:43.390 { 00:15:43.390 "name": "BaseBdev2", 00:15:43.390 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:43.390 "is_configured": true, 00:15:43.390 "data_offset": 2048, 00:15:43.390 "data_size": 63488 00:15:43.390 }, 00:15:43.390 { 00:15:43.390 "name": "BaseBdev3", 00:15:43.390 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:43.390 "is_configured": true, 00:15:43.390 "data_offset": 2048, 00:15:43.390 "data_size": 63488 00:15:43.390 } 00:15:43.390 ] 00:15:43.390 }' 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.390 
17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:43.390 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=603 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.390 "name": "raid_bdev1", 00:15:43.390 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:43.390 "strip_size_kb": 64, 00:15:43.390 "state": "online", 00:15:43.390 "raid_level": "raid5f", 00:15:43.390 "superblock": true, 00:15:43.390 "num_base_bdevs": 3, 00:15:43.390 "num_base_bdevs_discovered": 3, 00:15:43.390 "num_base_bdevs_operational": 3, 00:15:43.390 "process": { 00:15:43.390 "type": "rebuild", 00:15:43.390 "target": "spare", 00:15:43.390 "progress": { 00:15:43.390 "blocks": 22528, 00:15:43.390 "percent": 17 00:15:43.390 } 00:15:43.390 }, 00:15:43.390 "base_bdevs_list": [ 00:15:43.390 { 00:15:43.390 "name": "spare", 00:15:43.390 "uuid": "fe197531-1010-5903-90df-b3d8bc540d1e", 00:15:43.390 "is_configured": true, 00:15:43.390 "data_offset": 2048, 00:15:43.390 "data_size": 63488 00:15:43.390 }, 00:15:43.390 { 00:15:43.390 "name": "BaseBdev2", 00:15:43.390 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:43.390 "is_configured": true, 00:15:43.390 "data_offset": 2048, 00:15:43.390 "data_size": 63488 00:15:43.390 }, 00:15:43.390 { 00:15:43.390 "name": "BaseBdev3", 00:15:43.390 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:43.390 "is_configured": true, 00:15:43.390 "data_offset": 2048, 00:15:43.390 "data_size": 63488 00:15:43.390 } 00:15:43.390 ] 00:15:43.390 }' 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.390 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.648 17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.648 
17:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:44.584 17:08:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.584 17:08:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.584 17:08:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.584 17:08:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.585 17:08:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.585 17:08:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.585 17:08:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.585 17:08:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.585 17:08:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.585 17:08:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.585 17:08:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.585 17:08:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.585 "name": "raid_bdev1", 00:15:44.585 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:44.585 "strip_size_kb": 64, 00:15:44.585 "state": "online", 00:15:44.585 "raid_level": "raid5f", 00:15:44.585 "superblock": true, 00:15:44.585 "num_base_bdevs": 3, 00:15:44.585 "num_base_bdevs_discovered": 3, 00:15:44.585 "num_base_bdevs_operational": 3, 00:15:44.585 "process": { 00:15:44.585 "type": "rebuild", 00:15:44.585 "target": "spare", 00:15:44.585 "progress": { 00:15:44.585 "blocks": 45056, 00:15:44.585 "percent": 35 00:15:44.585 } 00:15:44.585 }, 00:15:44.585 
"base_bdevs_list": [ 00:15:44.585 { 00:15:44.585 "name": "spare", 00:15:44.585 "uuid": "fe197531-1010-5903-90df-b3d8bc540d1e", 00:15:44.585 "is_configured": true, 00:15:44.585 "data_offset": 2048, 00:15:44.585 "data_size": 63488 00:15:44.585 }, 00:15:44.585 { 00:15:44.585 "name": "BaseBdev2", 00:15:44.585 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:44.585 "is_configured": true, 00:15:44.585 "data_offset": 2048, 00:15:44.585 "data_size": 63488 00:15:44.585 }, 00:15:44.585 { 00:15:44.585 "name": "BaseBdev3", 00:15:44.585 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:44.585 "is_configured": true, 00:15:44.585 "data_offset": 2048, 00:15:44.585 "data_size": 63488 00:15:44.585 } 00:15:44.585 ] 00:15:44.585 }' 00:15:44.585 17:08:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.585 17:08:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.585 17:08:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.585 17:08:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.585 17:08:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:45.974 17:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.974 17:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.974 17:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.974 17:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.974 17:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.974 17:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.974 17:08:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.974 17:08:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.974 17:08:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.974 17:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.974 17:08:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.974 17:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.974 "name": "raid_bdev1", 00:15:45.974 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:45.974 "strip_size_kb": 64, 00:15:45.974 "state": "online", 00:15:45.974 "raid_level": "raid5f", 00:15:45.974 "superblock": true, 00:15:45.974 "num_base_bdevs": 3, 00:15:45.974 "num_base_bdevs_discovered": 3, 00:15:45.974 "num_base_bdevs_operational": 3, 00:15:45.974 "process": { 00:15:45.974 "type": "rebuild", 00:15:45.974 "target": "spare", 00:15:45.974 "progress": { 00:15:45.974 "blocks": 69632, 00:15:45.974 "percent": 54 00:15:45.974 } 00:15:45.974 }, 00:15:45.974 "base_bdevs_list": [ 00:15:45.974 { 00:15:45.974 "name": "spare", 00:15:45.975 "uuid": "fe197531-1010-5903-90df-b3d8bc540d1e", 00:15:45.975 "is_configured": true, 00:15:45.975 "data_offset": 2048, 00:15:45.975 "data_size": 63488 00:15:45.975 }, 00:15:45.975 { 00:15:45.975 "name": "BaseBdev2", 00:15:45.975 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:45.975 "is_configured": true, 00:15:45.975 "data_offset": 2048, 00:15:45.975 "data_size": 63488 00:15:45.975 }, 00:15:45.975 { 00:15:45.975 "name": "BaseBdev3", 00:15:45.975 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:45.975 "is_configured": true, 00:15:45.975 "data_offset": 2048, 00:15:45.975 "data_size": 63488 00:15:45.975 } 00:15:45.975 ] 00:15:45.975 }' 00:15:45.975 17:08:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.975 17:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.975 17:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.975 17:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.975 17:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.910 17:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:46.910 17:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.910 17:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.910 17:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.910 17:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.910 17:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.910 17:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.910 17:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.910 17:08:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.910 17:08:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.910 17:08:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.910 17:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.910 "name": "raid_bdev1", 00:15:46.910 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:46.910 
"strip_size_kb": 64, 00:15:46.910 "state": "online", 00:15:46.910 "raid_level": "raid5f", 00:15:46.910 "superblock": true, 00:15:46.910 "num_base_bdevs": 3, 00:15:46.910 "num_base_bdevs_discovered": 3, 00:15:46.910 "num_base_bdevs_operational": 3, 00:15:46.910 "process": { 00:15:46.910 "type": "rebuild", 00:15:46.910 "target": "spare", 00:15:46.910 "progress": { 00:15:46.910 "blocks": 94208, 00:15:46.910 "percent": 74 00:15:46.910 } 00:15:46.910 }, 00:15:46.910 "base_bdevs_list": [ 00:15:46.910 { 00:15:46.910 "name": "spare", 00:15:46.910 "uuid": "fe197531-1010-5903-90df-b3d8bc540d1e", 00:15:46.910 "is_configured": true, 00:15:46.910 "data_offset": 2048, 00:15:46.910 "data_size": 63488 00:15:46.910 }, 00:15:46.910 { 00:15:46.910 "name": "BaseBdev2", 00:15:46.910 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:46.910 "is_configured": true, 00:15:46.910 "data_offset": 2048, 00:15:46.910 "data_size": 63488 00:15:46.910 }, 00:15:46.910 { 00:15:46.910 "name": "BaseBdev3", 00:15:46.910 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:46.910 "is_configured": true, 00:15:46.910 "data_offset": 2048, 00:15:46.910 "data_size": 63488 00:15:46.910 } 00:15:46.910 ] 00:15:46.910 }' 00:15:46.910 17:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.910 17:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.910 17:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.910 17:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.910 17:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:48.288 17:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:48.288 17:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:15:48.288 17:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.288 17:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.288 17:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.288 17:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.288 17:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.288 17:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.288 17:08:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.288 17:08:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.288 17:08:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.288 17:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.288 "name": "raid_bdev1", 00:15:48.288 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:48.288 "strip_size_kb": 64, 00:15:48.288 "state": "online", 00:15:48.288 "raid_level": "raid5f", 00:15:48.288 "superblock": true, 00:15:48.288 "num_base_bdevs": 3, 00:15:48.288 "num_base_bdevs_discovered": 3, 00:15:48.288 "num_base_bdevs_operational": 3, 00:15:48.288 "process": { 00:15:48.288 "type": "rebuild", 00:15:48.288 "target": "spare", 00:15:48.288 "progress": { 00:15:48.288 "blocks": 116736, 00:15:48.288 "percent": 91 00:15:48.288 } 00:15:48.288 }, 00:15:48.288 "base_bdevs_list": [ 00:15:48.288 { 00:15:48.288 "name": "spare", 00:15:48.288 "uuid": "fe197531-1010-5903-90df-b3d8bc540d1e", 00:15:48.288 "is_configured": true, 00:15:48.288 "data_offset": 2048, 00:15:48.288 "data_size": 63488 00:15:48.288 }, 00:15:48.288 { 00:15:48.288 "name": "BaseBdev2", 00:15:48.288 "uuid": 
"6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:48.288 "is_configured": true, 00:15:48.288 "data_offset": 2048, 00:15:48.288 "data_size": 63488 00:15:48.288 }, 00:15:48.288 { 00:15:48.288 "name": "BaseBdev3", 00:15:48.288 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:48.288 "is_configured": true, 00:15:48.288 "data_offset": 2048, 00:15:48.288 "data_size": 63488 00:15:48.288 } 00:15:48.288 ] 00:15:48.288 }' 00:15:48.288 17:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.288 17:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.288 17:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.288 17:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.288 17:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:48.587 [2024-11-20 17:08:12.234244] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:48.588 [2024-11-20 17:08:12.234349] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:48.588 [2024-11-20 17:08:12.234487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.163 17:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.163 17:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.163 17:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.163 17:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.163 17:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.163 17:08:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.163 17:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.163 17:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.163 17:08:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.163 17:08:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.163 17:08:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.163 17:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.163 "name": "raid_bdev1", 00:15:49.163 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:49.163 "strip_size_kb": 64, 00:15:49.163 "state": "online", 00:15:49.163 "raid_level": "raid5f", 00:15:49.163 "superblock": true, 00:15:49.163 "num_base_bdevs": 3, 00:15:49.163 "num_base_bdevs_discovered": 3, 00:15:49.163 "num_base_bdevs_operational": 3, 00:15:49.163 "base_bdevs_list": [ 00:15:49.163 { 00:15:49.163 "name": "spare", 00:15:49.163 "uuid": "fe197531-1010-5903-90df-b3d8bc540d1e", 00:15:49.163 "is_configured": true, 00:15:49.163 "data_offset": 2048, 00:15:49.163 "data_size": 63488 00:15:49.163 }, 00:15:49.163 { 00:15:49.163 "name": "BaseBdev2", 00:15:49.163 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:49.163 "is_configured": true, 00:15:49.163 "data_offset": 2048, 00:15:49.163 "data_size": 63488 00:15:49.163 }, 00:15:49.163 { 00:15:49.163 "name": "BaseBdev3", 00:15:49.163 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:49.163 "is_configured": true, 00:15:49.163 "data_offset": 2048, 00:15:49.163 "data_size": 63488 00:15:49.163 } 00:15:49.163 ] 00:15:49.163 }' 00:15:49.163 17:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.422 "name": "raid_bdev1", 00:15:49.422 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:49.422 "strip_size_kb": 64, 00:15:49.422 "state": "online", 00:15:49.422 "raid_level": "raid5f", 00:15:49.422 "superblock": true, 00:15:49.422 "num_base_bdevs": 3, 00:15:49.422 "num_base_bdevs_discovered": 3, 00:15:49.422 "num_base_bdevs_operational": 3, 00:15:49.422 "base_bdevs_list": [ 
00:15:49.422 { 00:15:49.422 "name": "spare", 00:15:49.422 "uuid": "fe197531-1010-5903-90df-b3d8bc540d1e", 00:15:49.422 "is_configured": true, 00:15:49.422 "data_offset": 2048, 00:15:49.422 "data_size": 63488 00:15:49.422 }, 00:15:49.422 { 00:15:49.422 "name": "BaseBdev2", 00:15:49.422 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:49.422 "is_configured": true, 00:15:49.422 "data_offset": 2048, 00:15:49.422 "data_size": 63488 00:15:49.422 }, 00:15:49.422 { 00:15:49.422 "name": "BaseBdev3", 00:15:49.422 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:49.422 "is_configured": true, 00:15:49.422 "data_offset": 2048, 00:15:49.422 "data_size": 63488 00:15:49.422 } 00:15:49.422 ] 00:15:49.422 }' 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.422 17:08:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.422 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.423 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.423 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.423 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.682 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.682 "name": "raid_bdev1", 00:15:49.682 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:49.682 "strip_size_kb": 64, 00:15:49.682 "state": "online", 00:15:49.682 "raid_level": "raid5f", 00:15:49.682 "superblock": true, 00:15:49.682 "num_base_bdevs": 3, 00:15:49.682 "num_base_bdevs_discovered": 3, 00:15:49.682 "num_base_bdevs_operational": 3, 00:15:49.682 "base_bdevs_list": [ 00:15:49.682 { 00:15:49.682 "name": "spare", 00:15:49.682 "uuid": "fe197531-1010-5903-90df-b3d8bc540d1e", 00:15:49.682 "is_configured": true, 00:15:49.682 "data_offset": 2048, 00:15:49.682 "data_size": 63488 00:15:49.682 }, 00:15:49.682 { 00:15:49.682 "name": "BaseBdev2", 00:15:49.682 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:49.682 "is_configured": true, 00:15:49.682 "data_offset": 2048, 00:15:49.682 "data_size": 63488 00:15:49.682 }, 00:15:49.682 { 00:15:49.682 "name": "BaseBdev3", 00:15:49.682 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:49.682 "is_configured": true, 00:15:49.682 "data_offset": 2048, 00:15:49.682 
"data_size": 63488 00:15:49.682 } 00:15:49.682 ] 00:15:49.682 }' 00:15:49.682 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.682 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.941 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:49.941 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.941 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.200 [2024-11-20 17:08:13.810940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.200 [2024-11-20 17:08:13.810973] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.200 [2024-11-20 17:08:13.811069] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.200 [2024-11-20 17:08:13.811196] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.200 [2024-11-20 17:08:13.811242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
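The `verify_raid_bdev_process` checks repeated throughout the trace above reduce to one jq idiom: select the bdev entry by name from `bdev_raid_get_bdevs` output, then read `.process.type` and `.process.target` with a `// "none"` fallback so a finished rebuild compares as `none`. A minimal standalone sketch of that idiom, using a trimmed hard-coded JSON sample (an assumption standing in for the live RPC response) rather than a running SPDK target:

```shell
# Assumed sample: a trimmed copy of the bdev_raid_get_bdevs entry seen in the log.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid5f",
  "process": { "type": "rebuild", "target": "spare",
               "progress": { "blocks": 45056, "percent": 35 } }
}'

# Same fallback idiom as bdev_raid.sh@176/@177: jq's // operator yields "none"
# once the rebuild finishes and the "process" object disappears from the JSON.
process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
process_target=$(jq -r '.process.target // "none"' <<< "$raid_bdev_info")
echo "$process_type $process_target"   # prints: rebuild spare
```

With the sample above the test's `[[ rebuild == \r\e\b\u\i\l\d ]]` comparison passes; once `"process"` is absent, both variables become `none` and the loop's `break` path fires instead.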
00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:50.200 17:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:50.459 /dev/nbd0 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:50.459 17:08:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.459 1+0 records in 00:15:50.459 1+0 records out 00:15:50.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288293 s, 14.2 MB/s 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:50.459 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:50.718 /dev/nbd1 00:15:50.718 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:50.718 17:08:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:50.718 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:50.718 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:50.718 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:50.718 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:50.718 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:50.718 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:50.718 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:50.718 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:50.718 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.718 1+0 records in 00:15:50.718 1+0 records out 00:15:50.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038863 s, 10.5 MB/s 00:15:50.718 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.718 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:50.718 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.718 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:50.718 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:50.718 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.718 17:08:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:50.718 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:50.976 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:50.976 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:50.976 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:50.976 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:50.976 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:50.976 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.976 17:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:51.235 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:51.235 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:51.235 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:51.235 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.235 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.235 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:51.235 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:51.235 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.235 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.235 
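The `waitfornbd` helper visible in the nbd phase above polls `/proc/partitions` with `grep -q -w` until the device name appears, then issues a single direct-I/O `dd` read. A hedged self-contained sketch of just the polling half, with `PARTITIONS_FILE` as a hypothetical stand-in for `/proc/partitions` so the sketch runs without a real nbd device (the `dd` verification step is omitted for the same reason):

```shell
# Stand-in for /proc/partitions so this sketch needs no kernel nbd device.
PARTITIONS_FILE=$(mktemp)
printf 'major minor  #blocks  name\n  43    0    102400 nbd0\n' > "$PARTITIONS_FILE"

# Poll up to 20 times for the device name as a whole word (-w), mirroring
# the grep -q -w nbd0 /proc/partitions loop in common/autotest_common.sh.
waitfor_dev() {
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" "$PARTITIONS_FILE" && return 0
        sleep 0.1
    done
    return 1
}

waitfor_dev nbd0 && found=yes || found=no
rm -f "$PARTITIONS_FILE"
echo "$found"   # prints: yes
```

The `-w` flag matters here: it keeps `nbd0` from matching a longer name such as `nbd01`, which is why the real helper uses it against `/proc/partitions`.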
17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.494 [2024-11-20 17:08:15.330471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:51.494 
[2024-11-20 17:08:15.330552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.494 [2024-11-20 17:08:15.330580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:51.494 [2024-11-20 17:08:15.330595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.494 [2024-11-20 17:08:15.333450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.494 [2024-11-20 17:08:15.333508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:51.494 [2024-11-20 17:08:15.333602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:51.494 [2024-11-20 17:08:15.333661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:51.494 [2024-11-20 17:08:15.333840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.494 [2024-11-20 17:08:15.333970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.494 spare 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.494 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.754 [2024-11-20 17:08:15.434109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:51.754 [2024-11-20 17:08:15.434139] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:51.754 [2024-11-20 17:08:15.434406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:51.754 [2024-11-20 17:08:15.438984] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:51.754 [2024-11-20 17:08:15.439007] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:51.754 [2024-11-20 17:08:15.439238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.754 "name": "raid_bdev1", 00:15:51.754 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:51.754 "strip_size_kb": 64, 00:15:51.754 "state": "online", 00:15:51.754 "raid_level": "raid5f", 00:15:51.754 "superblock": true, 00:15:51.754 "num_base_bdevs": 3, 00:15:51.754 "num_base_bdevs_discovered": 3, 00:15:51.754 "num_base_bdevs_operational": 3, 00:15:51.754 "base_bdevs_list": [ 00:15:51.754 { 00:15:51.754 "name": "spare", 00:15:51.754 "uuid": "fe197531-1010-5903-90df-b3d8bc540d1e", 00:15:51.754 "is_configured": true, 00:15:51.754 "data_offset": 2048, 00:15:51.754 "data_size": 63488 00:15:51.754 }, 00:15:51.754 { 00:15:51.754 "name": "BaseBdev2", 00:15:51.754 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:51.754 "is_configured": true, 00:15:51.754 "data_offset": 2048, 00:15:51.754 "data_size": 63488 00:15:51.754 }, 00:15:51.754 { 00:15:51.754 "name": "BaseBdev3", 00:15:51.754 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:51.754 "is_configured": true, 00:15:51.754 "data_offset": 2048, 00:15:51.754 "data_size": 63488 00:15:51.754 } 00:15:51.754 ] 00:15:51.754 }' 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.754 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.321 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:52.321 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.321 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:52.321 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:15:52.321 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.321 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.321 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.321 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.321 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.321 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.321 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.321 "name": "raid_bdev1", 00:15:52.321 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:52.321 "strip_size_kb": 64, 00:15:52.321 "state": "online", 00:15:52.321 "raid_level": "raid5f", 00:15:52.321 "superblock": true, 00:15:52.321 "num_base_bdevs": 3, 00:15:52.321 "num_base_bdevs_discovered": 3, 00:15:52.321 "num_base_bdevs_operational": 3, 00:15:52.321 "base_bdevs_list": [ 00:15:52.321 { 00:15:52.321 "name": "spare", 00:15:52.321 "uuid": "fe197531-1010-5903-90df-b3d8bc540d1e", 00:15:52.321 "is_configured": true, 00:15:52.321 "data_offset": 2048, 00:15:52.321 "data_size": 63488 00:15:52.321 }, 00:15:52.321 { 00:15:52.321 "name": "BaseBdev2", 00:15:52.321 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:52.321 "is_configured": true, 00:15:52.321 "data_offset": 2048, 00:15:52.321 "data_size": 63488 00:15:52.321 }, 00:15:52.321 { 00:15:52.321 "name": "BaseBdev3", 00:15:52.321 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:52.321 "is_configured": true, 00:15:52.321 "data_offset": 2048, 00:15:52.321 "data_size": 63488 00:15:52.321 } 00:15:52.321 ] 00:15:52.321 }' 00:15:52.321 17:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.321 [2024-11-20 17:08:16.133232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:52.321 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.322 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.322 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.322 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.322 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.322 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.322 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.322 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.322 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.322 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.322 "name": "raid_bdev1", 00:15:52.322 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:52.322 "strip_size_kb": 64, 00:15:52.322 "state": "online", 00:15:52.322 "raid_level": "raid5f", 00:15:52.322 "superblock": true, 00:15:52.322 "num_base_bdevs": 3, 00:15:52.322 "num_base_bdevs_discovered": 2, 00:15:52.322 "num_base_bdevs_operational": 2, 00:15:52.322 "base_bdevs_list": [ 00:15:52.322 { 00:15:52.322 "name": null, 00:15:52.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.322 "is_configured": false, 00:15:52.322 "data_offset": 0, 00:15:52.322 "data_size": 63488 00:15:52.322 }, 00:15:52.322 { 00:15:52.322 "name": "BaseBdev2", 
00:15:52.322 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:52.322 "is_configured": true, 00:15:52.322 "data_offset": 2048, 00:15:52.322 "data_size": 63488 00:15:52.322 }, 00:15:52.322 { 00:15:52.322 "name": "BaseBdev3", 00:15:52.322 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:52.322 "is_configured": true, 00:15:52.322 "data_offset": 2048, 00:15:52.322 "data_size": 63488 00:15:52.322 } 00:15:52.322 ] 00:15:52.322 }' 00:15:52.322 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.322 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.888 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:52.889 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.889 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.889 [2024-11-20 17:08:16.625423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:52.889 [2024-11-20 17:08:16.625625] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:52.889 [2024-11-20 17:08:16.625649] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:52.889 [2024-11-20 17:08:16.625705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:52.889 [2024-11-20 17:08:16.639181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:52.889 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.889 17:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:52.889 [2024-11-20 17:08:16.645829] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:53.824 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.824 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.824 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.824 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.824 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.824 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.824 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.824 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.824 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.824 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.083 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.083 "name": "raid_bdev1", 00:15:54.083 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:54.083 "strip_size_kb": 64, 00:15:54.083 "state": "online", 00:15:54.083 
"raid_level": "raid5f", 00:15:54.083 "superblock": true, 00:15:54.083 "num_base_bdevs": 3, 00:15:54.083 "num_base_bdevs_discovered": 3, 00:15:54.083 "num_base_bdevs_operational": 3, 00:15:54.083 "process": { 00:15:54.083 "type": "rebuild", 00:15:54.083 "target": "spare", 00:15:54.083 "progress": { 00:15:54.083 "blocks": 18432, 00:15:54.083 "percent": 14 00:15:54.083 } 00:15:54.083 }, 00:15:54.083 "base_bdevs_list": [ 00:15:54.083 { 00:15:54.083 "name": "spare", 00:15:54.083 "uuid": "fe197531-1010-5903-90df-b3d8bc540d1e", 00:15:54.083 "is_configured": true, 00:15:54.083 "data_offset": 2048, 00:15:54.083 "data_size": 63488 00:15:54.083 }, 00:15:54.083 { 00:15:54.083 "name": "BaseBdev2", 00:15:54.083 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:54.083 "is_configured": true, 00:15:54.083 "data_offset": 2048, 00:15:54.083 "data_size": 63488 00:15:54.083 }, 00:15:54.083 { 00:15:54.083 "name": "BaseBdev3", 00:15:54.083 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:54.083 "is_configured": true, 00:15:54.083 "data_offset": 2048, 00:15:54.083 "data_size": 63488 00:15:54.083 } 00:15:54.083 ] 00:15:54.083 }' 00:15:54.083 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.083 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.083 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.083 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.083 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:54.083 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.083 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.083 [2024-11-20 17:08:17.807442] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:54.084 [2024-11-20 17:08:17.858286] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:54.084 [2024-11-20 17:08:17.858369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.084 [2024-11-20 17:08:17.858401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:54.084 [2024-11-20 17:08:17.858414] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.084 17:08:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.084 "name": "raid_bdev1", 00:15:54.084 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:54.084 "strip_size_kb": 64, 00:15:54.084 "state": "online", 00:15:54.084 "raid_level": "raid5f", 00:15:54.084 "superblock": true, 00:15:54.084 "num_base_bdevs": 3, 00:15:54.084 "num_base_bdevs_discovered": 2, 00:15:54.084 "num_base_bdevs_operational": 2, 00:15:54.084 "base_bdevs_list": [ 00:15:54.084 { 00:15:54.084 "name": null, 00:15:54.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.084 "is_configured": false, 00:15:54.084 "data_offset": 0, 00:15:54.084 "data_size": 63488 00:15:54.084 }, 00:15:54.084 { 00:15:54.084 "name": "BaseBdev2", 00:15:54.084 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:54.084 "is_configured": true, 00:15:54.084 "data_offset": 2048, 00:15:54.084 "data_size": 63488 00:15:54.084 }, 00:15:54.084 { 00:15:54.084 "name": "BaseBdev3", 00:15:54.084 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:54.084 "is_configured": true, 00:15:54.084 "data_offset": 2048, 00:15:54.084 "data_size": 63488 00:15:54.084 } 00:15:54.084 ] 00:15:54.084 }' 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.084 17:08:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.652 17:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:54.652 17:08:18 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.652 17:08:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.652 [2024-11-20 17:08:18.393705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:54.652 [2024-11-20 17:08:18.393834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.652 [2024-11-20 17:08:18.393863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:54.652 [2024-11-20 17:08:18.393882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.652 [2024-11-20 17:08:18.394569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.652 [2024-11-20 17:08:18.394603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:54.652 [2024-11-20 17:08:18.394709] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:54.652 [2024-11-20 17:08:18.394733] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:54.652 [2024-11-20 17:08:18.394745] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:54.652 [2024-11-20 17:08:18.394790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:54.652 [2024-11-20 17:08:18.408680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:54.652 spare 00:15:54.652 17:08:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.652 17:08:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:54.652 [2024-11-20 17:08:18.415913] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:55.618 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.618 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.618 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.618 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.618 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.618 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.618 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.618 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.618 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.618 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.618 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.618 "name": "raid_bdev1", 00:15:55.618 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:55.618 "strip_size_kb": 64, 00:15:55.618 "state": 
"online", 00:15:55.618 "raid_level": "raid5f", 00:15:55.618 "superblock": true, 00:15:55.618 "num_base_bdevs": 3, 00:15:55.618 "num_base_bdevs_discovered": 3, 00:15:55.618 "num_base_bdevs_operational": 3, 00:15:55.618 "process": { 00:15:55.618 "type": "rebuild", 00:15:55.618 "target": "spare", 00:15:55.618 "progress": { 00:15:55.618 "blocks": 18432, 00:15:55.618 "percent": 14 00:15:55.618 } 00:15:55.618 }, 00:15:55.618 "base_bdevs_list": [ 00:15:55.618 { 00:15:55.618 "name": "spare", 00:15:55.618 "uuid": "fe197531-1010-5903-90df-b3d8bc540d1e", 00:15:55.618 "is_configured": true, 00:15:55.618 "data_offset": 2048, 00:15:55.618 "data_size": 63488 00:15:55.618 }, 00:15:55.618 { 00:15:55.618 "name": "BaseBdev2", 00:15:55.618 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:55.618 "is_configured": true, 00:15:55.618 "data_offset": 2048, 00:15:55.618 "data_size": 63488 00:15:55.618 }, 00:15:55.618 { 00:15:55.618 "name": "BaseBdev3", 00:15:55.618 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:55.618 "is_configured": true, 00:15:55.618 "data_offset": 2048, 00:15:55.618 "data_size": 63488 00:15:55.618 } 00:15:55.618 ] 00:15:55.618 }' 00:15:55.618 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.877 [2024-11-20 17:08:19.577395] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:55.877 [2024-11-20 17:08:19.628391] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:55.877 [2024-11-20 17:08:19.628622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.877 [2024-11-20 17:08:19.628661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:55.877 [2024-11-20 17:08:19.628673] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.877 "name": "raid_bdev1", 00:15:55.877 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:55.877 "strip_size_kb": 64, 00:15:55.877 "state": "online", 00:15:55.877 "raid_level": "raid5f", 00:15:55.877 "superblock": true, 00:15:55.877 "num_base_bdevs": 3, 00:15:55.877 "num_base_bdevs_discovered": 2, 00:15:55.877 "num_base_bdevs_operational": 2, 00:15:55.877 "base_bdevs_list": [ 00:15:55.877 { 00:15:55.877 "name": null, 00:15:55.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.877 "is_configured": false, 00:15:55.877 "data_offset": 0, 00:15:55.877 "data_size": 63488 00:15:55.877 }, 00:15:55.877 { 00:15:55.877 "name": "BaseBdev2", 00:15:55.877 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:55.877 "is_configured": true, 00:15:55.877 "data_offset": 2048, 00:15:55.877 "data_size": 63488 00:15:55.877 }, 00:15:55.877 { 00:15:55.877 "name": "BaseBdev3", 00:15:55.877 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:55.877 "is_configured": true, 00:15:55.877 "data_offset": 2048, 00:15:55.877 "data_size": 63488 00:15:55.877 } 00:15:55.877 ] 00:15:55.877 }' 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.877 17:08:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.445 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:56.445 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:56.445 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:56.445 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:56.445 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.445 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.445 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.445 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.445 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.446 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.446 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.446 "name": "raid_bdev1", 00:15:56.446 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:56.446 "strip_size_kb": 64, 00:15:56.446 "state": "online", 00:15:56.446 "raid_level": "raid5f", 00:15:56.446 "superblock": true, 00:15:56.446 "num_base_bdevs": 3, 00:15:56.446 "num_base_bdevs_discovered": 2, 00:15:56.446 "num_base_bdevs_operational": 2, 00:15:56.446 "base_bdevs_list": [ 00:15:56.446 { 00:15:56.446 "name": null, 00:15:56.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.446 "is_configured": false, 00:15:56.446 "data_offset": 0, 00:15:56.446 "data_size": 63488 00:15:56.446 }, 00:15:56.446 { 00:15:56.446 "name": "BaseBdev2", 00:15:56.446 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:56.446 "is_configured": true, 00:15:56.446 "data_offset": 2048, 00:15:56.446 "data_size": 63488 00:15:56.446 }, 00:15:56.446 { 00:15:56.446 "name": "BaseBdev3", 00:15:56.446 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:56.446 "is_configured": true, 
00:15:56.446 "data_offset": 2048, 00:15:56.446 "data_size": 63488 00:15:56.446 } 00:15:56.446 ] 00:15:56.446 }' 00:15:56.446 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.446 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:56.446 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.704 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:56.704 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:56.704 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.704 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.704 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.704 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:56.704 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.704 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.704 [2024-11-20 17:08:20.335688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:56.704 [2024-11-20 17:08:20.335751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.704 [2024-11-20 17:08:20.335796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:56.704 [2024-11-20 17:08:20.335812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.704 [2024-11-20 17:08:20.336399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.704 [2024-11-20 
17:08:20.336427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:56.704 [2024-11-20 17:08:20.336532] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:56.704 [2024-11-20 17:08:20.336585] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:56.704 [2024-11-20 17:08:20.336608] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:56.705 [2024-11-20 17:08:20.336620] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:56.705 BaseBdev1 00:15:56.705 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.705 17:08:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:57.640 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:57.640 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.640 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.640 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.640 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.640 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.640 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.640 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.640 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.640 17:08:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.640 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.640 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.640 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.640 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.640 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.640 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.640 "name": "raid_bdev1", 00:15:57.640 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:57.640 "strip_size_kb": 64, 00:15:57.640 "state": "online", 00:15:57.640 "raid_level": "raid5f", 00:15:57.640 "superblock": true, 00:15:57.640 "num_base_bdevs": 3, 00:15:57.640 "num_base_bdevs_discovered": 2, 00:15:57.640 "num_base_bdevs_operational": 2, 00:15:57.640 "base_bdevs_list": [ 00:15:57.640 { 00:15:57.640 "name": null, 00:15:57.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.640 "is_configured": false, 00:15:57.640 "data_offset": 0, 00:15:57.640 "data_size": 63488 00:15:57.640 }, 00:15:57.640 { 00:15:57.640 "name": "BaseBdev2", 00:15:57.640 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:57.640 "is_configured": true, 00:15:57.640 "data_offset": 2048, 00:15:57.640 "data_size": 63488 00:15:57.640 }, 00:15:57.640 { 00:15:57.640 "name": "BaseBdev3", 00:15:57.640 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:57.640 "is_configured": true, 00:15:57.640 "data_offset": 2048, 00:15:57.640 "data_size": 63488 00:15:57.640 } 00:15:57.640 ] 00:15:57.640 }' 00:15:57.640 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.640 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:58.209 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:58.209 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.209 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:58.209 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:58.209 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.209 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.209 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.209 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.209 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.209 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.209 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.209 "name": "raid_bdev1", 00:15:58.209 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:58.209 "strip_size_kb": 64, 00:15:58.209 "state": "online", 00:15:58.209 "raid_level": "raid5f", 00:15:58.209 "superblock": true, 00:15:58.209 "num_base_bdevs": 3, 00:15:58.209 "num_base_bdevs_discovered": 2, 00:15:58.209 "num_base_bdevs_operational": 2, 00:15:58.209 "base_bdevs_list": [ 00:15:58.209 { 00:15:58.209 "name": null, 00:15:58.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.209 "is_configured": false, 00:15:58.209 "data_offset": 0, 00:15:58.209 "data_size": 63488 00:15:58.209 }, 00:15:58.209 { 00:15:58.209 "name": "BaseBdev2", 00:15:58.209 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 
00:15:58.209 "is_configured": true, 00:15:58.209 "data_offset": 2048, 00:15:58.209 "data_size": 63488 00:15:58.209 }, 00:15:58.209 { 00:15:58.209 "name": "BaseBdev3", 00:15:58.209 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:58.210 "is_configured": true, 00:15:58.210 "data_offset": 2048, 00:15:58.210 "data_size": 63488 00:15:58.210 } 00:15:58.210 ] 00:15:58.210 }' 00:15:58.210 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.210 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:58.210 17:08:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.210 17:08:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:58.210 17:08:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:58.210 17:08:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:58.210 17:08:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:58.210 17:08:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:58.210 17:08:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.210 17:08:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:58.210 17:08:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.210 17:08:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:58.210 17:08:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.210 17:08:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.210 [2024-11-20 17:08:22.032345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.210 [2024-11-20 17:08:22.032691] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:58.210 [2024-11-20 17:08:22.032722] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:58.210 request: 00:15:58.210 { 00:15:58.210 "base_bdev": "BaseBdev1", 00:15:58.210 "raid_bdev": "raid_bdev1", 00:15:58.210 "method": "bdev_raid_add_base_bdev", 00:15:58.210 "req_id": 1 00:15:58.210 } 00:15:58.210 Got JSON-RPC error response 00:15:58.210 response: 00:15:58.210 { 00:15:58.210 "code": -22, 00:15:58.210 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:58.210 } 00:15:58.210 17:08:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:58.210 17:08:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:58.210 17:08:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:58.210 17:08:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:58.210 17:08:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:58.210 17:08:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:59.587 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:59.587 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.587 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.587 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.587 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.587 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.587 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.587 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.587 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.587 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.587 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.587 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.587 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.587 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.587 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.587 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.587 "name": "raid_bdev1", 00:15:59.587 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:59.587 "strip_size_kb": 64, 00:15:59.587 "state": "online", 00:15:59.587 "raid_level": "raid5f", 00:15:59.587 "superblock": true, 00:15:59.587 "num_base_bdevs": 3, 00:15:59.587 "num_base_bdevs_discovered": 2, 00:15:59.587 "num_base_bdevs_operational": 2, 00:15:59.587 "base_bdevs_list": [ 00:15:59.587 { 00:15:59.587 "name": null, 00:15:59.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.587 "is_configured": false, 00:15:59.587 "data_offset": 0, 00:15:59.587 "data_size": 63488 00:15:59.587 }, 00:15:59.587 { 00:15:59.587 
"name": "BaseBdev2", 00:15:59.587 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:59.587 "is_configured": true, 00:15:59.587 "data_offset": 2048, 00:15:59.588 "data_size": 63488 00:15:59.588 }, 00:15:59.588 { 00:15:59.588 "name": "BaseBdev3", 00:15:59.588 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:59.588 "is_configured": true, 00:15:59.588 "data_offset": 2048, 00:15:59.588 "data_size": 63488 00:15:59.588 } 00:15:59.588 ] 00:15:59.588 }' 00:15:59.588 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.588 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.846 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.846 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.846 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.846 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.846 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.846 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.846 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.846 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.846 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.846 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.846 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.846 "name": "raid_bdev1", 00:15:59.846 "uuid": "fa5b038c-0392-48a7-b425-5c7f87ec8cc4", 00:15:59.846 
"strip_size_kb": 64, 00:15:59.846 "state": "online", 00:15:59.846 "raid_level": "raid5f", 00:15:59.846 "superblock": true, 00:15:59.846 "num_base_bdevs": 3, 00:15:59.846 "num_base_bdevs_discovered": 2, 00:15:59.846 "num_base_bdevs_operational": 2, 00:15:59.846 "base_bdevs_list": [ 00:15:59.846 { 00:15:59.846 "name": null, 00:15:59.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.846 "is_configured": false, 00:15:59.846 "data_offset": 0, 00:15:59.846 "data_size": 63488 00:15:59.846 }, 00:15:59.846 { 00:15:59.846 "name": "BaseBdev2", 00:15:59.846 "uuid": "6f1f84c8-b6e2-5074-9558-1087feefd637", 00:15:59.846 "is_configured": true, 00:15:59.846 "data_offset": 2048, 00:15:59.846 "data_size": 63488 00:15:59.846 }, 00:15:59.846 { 00:15:59.846 "name": "BaseBdev3", 00:15:59.846 "uuid": "438c7a0c-9bb4-51a1-9993-4fad5a4d78cc", 00:15:59.846 "is_configured": true, 00:15:59.846 "data_offset": 2048, 00:15:59.846 "data_size": 63488 00:15:59.846 } 00:15:59.846 ] 00:15:59.846 }' 00:15:59.846 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.846 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.846 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.105 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:00.105 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82163 00:16:00.105 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82163 ']' 00:16:00.105 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82163 00:16:00.105 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:00.105 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.105 17:08:23 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82163 00:16:00.105 killing process with pid 82163 00:16:00.105 Received shutdown signal, test time was about 60.000000 seconds 00:16:00.105 00:16:00.105 Latency(us) 00:16:00.105 [2024-11-20T17:08:23.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.105 [2024-11-20T17:08:23.974Z] =================================================================================================================== 00:16:00.105 [2024-11-20T17:08:23.974Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:00.105 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.105 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.105 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82163' 00:16:00.105 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82163 00:16:00.105 [2024-11-20 17:08:23.781009] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:00.105 17:08:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82163 00:16:00.105 [2024-11-20 17:08:23.781209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.105 [2024-11-20 17:08:23.781369] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.105 [2024-11-20 17:08:23.781390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:00.363 [2024-11-20 17:08:24.084456] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:01.303 ************************************ 00:16:01.304 END TEST raid5f_rebuild_test_sb 00:16:01.304 ************************************ 00:16:01.304 17:08:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:01.304 00:16:01.304 real 0m24.771s 00:16:01.304 user 0m33.119s 00:16:01.304 sys 0m2.538s 00:16:01.304 17:08:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.304 17:08:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.304 17:08:25 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:01.304 17:08:25 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:01.304 17:08:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:01.304 17:08:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.304 17:08:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:01.304 ************************************ 00:16:01.304 START TEST raid5f_state_function_test 00:16:01.304 ************************************ 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:01.304 Process raid pid: 82931 00:16:01.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82931 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82931' 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82931 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82931 ']' 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.304 17:08:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.563 [2024-11-20 17:08:25.262020] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:16:01.563 [2024-11-20 17:08:25.262232] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.821 [2024-11-20 17:08:25.447529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.821 [2024-11-20 17:08:25.578425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.080 [2024-11-20 17:08:25.784552] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.080 [2024-11-20 17:08:25.784593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.339 [2024-11-20 17:08:26.184631] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:02.339 [2024-11-20 17:08:26.184920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:02.339 [2024-11-20 17:08:26.184949] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.339 [2024-11-20 17:08:26.184967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.339 [2024-11-20 17:08:26.184978] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:02.339 [2024-11-20 17:08:26.184999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.339 [2024-11-20 17:08:26.185009] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:02.339 [2024-11-20 17:08:26.185028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.339 17:08:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.339 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.598 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.598 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.598 "name": "Existed_Raid", 00:16:02.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.598 "strip_size_kb": 64, 00:16:02.598 "state": "configuring", 00:16:02.598 "raid_level": "raid5f", 00:16:02.598 "superblock": false, 00:16:02.598 "num_base_bdevs": 4, 00:16:02.598 "num_base_bdevs_discovered": 0, 00:16:02.598 "num_base_bdevs_operational": 4, 00:16:02.598 "base_bdevs_list": [ 00:16:02.598 { 00:16:02.598 "name": "BaseBdev1", 00:16:02.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.598 "is_configured": false, 00:16:02.598 "data_offset": 0, 00:16:02.598 "data_size": 0 00:16:02.598 }, 00:16:02.598 { 00:16:02.598 "name": "BaseBdev2", 00:16:02.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.598 "is_configured": false, 00:16:02.598 "data_offset": 0, 00:16:02.598 "data_size": 0 00:16:02.598 }, 00:16:02.598 { 00:16:02.598 "name": "BaseBdev3", 00:16:02.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.598 "is_configured": false, 00:16:02.598 "data_offset": 0, 00:16:02.598 "data_size": 0 00:16:02.598 }, 00:16:02.598 { 00:16:02.598 "name": "BaseBdev4", 00:16:02.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.598 "is_configured": false, 00:16:02.598 "data_offset": 0, 00:16:02.598 "data_size": 0 00:16:02.598 } 00:16:02.598 ] 00:16:02.598 }' 00:16:02.598 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.598 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.857 17:08:26 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:02.857 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.857 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.858 [2024-11-20 17:08:26.700845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:02.858 [2024-11-20 17:08:26.701046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:02.858 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.858 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:02.858 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.858 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.858 [2024-11-20 17:08:26.708797] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:02.858 [2024-11-20 17:08:26.708982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:02.858 [2024-11-20 17:08:26.709008] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.858 [2024-11-20 17:08:26.709026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.858 [2024-11-20 17:08:26.709041] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:02.858 [2024-11-20 17:08:26.709055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.858 [2024-11-20 17:08:26.709064] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:02.858 [2024-11-20 17:08:26.709088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:02.858 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.858 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:02.858 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.858 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.117 [2024-11-20 17:08:26.755524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.117 BaseBdev1 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.117 
17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.117 [ 00:16:03.117 { 00:16:03.117 "name": "BaseBdev1", 00:16:03.117 "aliases": [ 00:16:03.117 "a63ddc29-b9b9-432a-aab1-05e91df6372a" 00:16:03.117 ], 00:16:03.117 "product_name": "Malloc disk", 00:16:03.117 "block_size": 512, 00:16:03.117 "num_blocks": 65536, 00:16:03.117 "uuid": "a63ddc29-b9b9-432a-aab1-05e91df6372a", 00:16:03.117 "assigned_rate_limits": { 00:16:03.117 "rw_ios_per_sec": 0, 00:16:03.117 "rw_mbytes_per_sec": 0, 00:16:03.117 "r_mbytes_per_sec": 0, 00:16:03.117 "w_mbytes_per_sec": 0 00:16:03.117 }, 00:16:03.117 "claimed": true, 00:16:03.117 "claim_type": "exclusive_write", 00:16:03.117 "zoned": false, 00:16:03.117 "supported_io_types": { 00:16:03.117 "read": true, 00:16:03.117 "write": true, 00:16:03.117 "unmap": true, 00:16:03.117 "flush": true, 00:16:03.117 "reset": true, 00:16:03.117 "nvme_admin": false, 00:16:03.117 "nvme_io": false, 00:16:03.117 "nvme_io_md": false, 00:16:03.117 "write_zeroes": true, 00:16:03.117 "zcopy": true, 00:16:03.117 "get_zone_info": false, 00:16:03.117 "zone_management": false, 00:16:03.117 "zone_append": false, 00:16:03.117 "compare": false, 00:16:03.117 "compare_and_write": false, 00:16:03.117 "abort": true, 00:16:03.117 "seek_hole": false, 00:16:03.117 "seek_data": false, 00:16:03.117 "copy": true, 00:16:03.117 "nvme_iov_md": false 00:16:03.117 }, 00:16:03.117 "memory_domains": [ 00:16:03.117 { 00:16:03.117 "dma_device_id": "system", 00:16:03.117 "dma_device_type": 1 00:16:03.117 }, 00:16:03.117 { 00:16:03.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.117 "dma_device_type": 2 00:16:03.117 } 00:16:03.117 ], 00:16:03.117 "driver_specific": {} 00:16:03.117 } 
00:16:03.117 ] 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.117 "name": "Existed_Raid", 00:16:03.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.117 "strip_size_kb": 64, 00:16:03.117 "state": "configuring", 00:16:03.117 "raid_level": "raid5f", 00:16:03.117 "superblock": false, 00:16:03.117 "num_base_bdevs": 4, 00:16:03.117 "num_base_bdevs_discovered": 1, 00:16:03.117 "num_base_bdevs_operational": 4, 00:16:03.117 "base_bdevs_list": [ 00:16:03.117 { 00:16:03.117 "name": "BaseBdev1", 00:16:03.117 "uuid": "a63ddc29-b9b9-432a-aab1-05e91df6372a", 00:16:03.117 "is_configured": true, 00:16:03.117 "data_offset": 0, 00:16:03.117 "data_size": 65536 00:16:03.117 }, 00:16:03.117 { 00:16:03.117 "name": "BaseBdev2", 00:16:03.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.117 "is_configured": false, 00:16:03.117 "data_offset": 0, 00:16:03.117 "data_size": 0 00:16:03.117 }, 00:16:03.117 { 00:16:03.117 "name": "BaseBdev3", 00:16:03.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.117 "is_configured": false, 00:16:03.117 "data_offset": 0, 00:16:03.117 "data_size": 0 00:16:03.117 }, 00:16:03.117 { 00:16:03.117 "name": "BaseBdev4", 00:16:03.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.117 "is_configured": false, 00:16:03.117 "data_offset": 0, 00:16:03.117 "data_size": 0 00:16:03.117 } 00:16:03.117 ] 00:16:03.117 }' 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.117 17:08:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.684 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:03.684 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.684 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.684 
[2024-11-20 17:08:27.319724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:03.684 [2024-11-20 17:08:27.319797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:03.684 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.684 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:03.684 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.684 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.684 [2024-11-20 17:08:27.327784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.684 [2024-11-20 17:08:27.330400] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:03.684 [2024-11-20 17:08:27.330591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:03.684 [2024-11-20 17:08:27.330716] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:03.684 [2024-11-20 17:08:27.330809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:03.684 [2024-11-20 17:08:27.330939] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:03.684 [2024-11-20 17:08:27.330970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:03.684 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.684 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:03.684 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:03.684 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:03.684 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.684 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.684 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.684 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.684 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.685 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.685 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.685 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.685 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.685 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.685 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.685 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.685 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.685 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.685 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.685 "name": "Existed_Raid", 00:16:03.685 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:03.685 "strip_size_kb": 64, 00:16:03.685 "state": "configuring", 00:16:03.685 "raid_level": "raid5f", 00:16:03.685 "superblock": false, 00:16:03.685 "num_base_bdevs": 4, 00:16:03.685 "num_base_bdevs_discovered": 1, 00:16:03.685 "num_base_bdevs_operational": 4, 00:16:03.685 "base_bdevs_list": [ 00:16:03.685 { 00:16:03.685 "name": "BaseBdev1", 00:16:03.685 "uuid": "a63ddc29-b9b9-432a-aab1-05e91df6372a", 00:16:03.685 "is_configured": true, 00:16:03.685 "data_offset": 0, 00:16:03.685 "data_size": 65536 00:16:03.685 }, 00:16:03.685 { 00:16:03.685 "name": "BaseBdev2", 00:16:03.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.685 "is_configured": false, 00:16:03.685 "data_offset": 0, 00:16:03.685 "data_size": 0 00:16:03.685 }, 00:16:03.685 { 00:16:03.685 "name": "BaseBdev3", 00:16:03.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.685 "is_configured": false, 00:16:03.685 "data_offset": 0, 00:16:03.685 "data_size": 0 00:16:03.685 }, 00:16:03.685 { 00:16:03.685 "name": "BaseBdev4", 00:16:03.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.685 "is_configured": false, 00:16:03.685 "data_offset": 0, 00:16:03.685 "data_size": 0 00:16:03.685 } 00:16:03.685 ] 00:16:03.685 }' 00:16:03.685 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.685 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.252 [2024-11-20 17:08:27.901273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.252 BaseBdev2 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.252 [ 00:16:04.252 { 00:16:04.252 "name": "BaseBdev2", 00:16:04.252 "aliases": [ 00:16:04.252 "44d48571-7eec-4b14-9e3c-45594dcf074f" 00:16:04.252 ], 00:16:04.252 "product_name": "Malloc disk", 00:16:04.252 "block_size": 512, 00:16:04.252 "num_blocks": 65536, 00:16:04.252 "uuid": "44d48571-7eec-4b14-9e3c-45594dcf074f", 00:16:04.252 "assigned_rate_limits": { 00:16:04.252 "rw_ios_per_sec": 0, 00:16:04.252 "rw_mbytes_per_sec": 0, 00:16:04.252 
"r_mbytes_per_sec": 0, 00:16:04.252 "w_mbytes_per_sec": 0 00:16:04.252 }, 00:16:04.252 "claimed": true, 00:16:04.252 "claim_type": "exclusive_write", 00:16:04.252 "zoned": false, 00:16:04.252 "supported_io_types": { 00:16:04.252 "read": true, 00:16:04.252 "write": true, 00:16:04.252 "unmap": true, 00:16:04.252 "flush": true, 00:16:04.252 "reset": true, 00:16:04.252 "nvme_admin": false, 00:16:04.252 "nvme_io": false, 00:16:04.252 "nvme_io_md": false, 00:16:04.252 "write_zeroes": true, 00:16:04.252 "zcopy": true, 00:16:04.252 "get_zone_info": false, 00:16:04.252 "zone_management": false, 00:16:04.252 "zone_append": false, 00:16:04.252 "compare": false, 00:16:04.252 "compare_and_write": false, 00:16:04.252 "abort": true, 00:16:04.252 "seek_hole": false, 00:16:04.252 "seek_data": false, 00:16:04.252 "copy": true, 00:16:04.252 "nvme_iov_md": false 00:16:04.252 }, 00:16:04.252 "memory_domains": [ 00:16:04.252 { 00:16:04.252 "dma_device_id": "system", 00:16:04.252 "dma_device_type": 1 00:16:04.252 }, 00:16:04.252 { 00:16:04.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.252 "dma_device_type": 2 00:16:04.252 } 00:16:04.252 ], 00:16:04.252 "driver_specific": {} 00:16:04.252 } 00:16:04.252 ] 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.252 "name": "Existed_Raid", 00:16:04.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.252 "strip_size_kb": 64, 00:16:04.252 "state": "configuring", 00:16:04.252 "raid_level": "raid5f", 00:16:04.252 "superblock": false, 00:16:04.252 "num_base_bdevs": 4, 00:16:04.252 "num_base_bdevs_discovered": 2, 00:16:04.252 "num_base_bdevs_operational": 4, 00:16:04.252 "base_bdevs_list": [ 00:16:04.252 { 00:16:04.252 "name": "BaseBdev1", 00:16:04.252 "uuid": 
"a63ddc29-b9b9-432a-aab1-05e91df6372a", 00:16:04.252 "is_configured": true, 00:16:04.252 "data_offset": 0, 00:16:04.252 "data_size": 65536 00:16:04.252 }, 00:16:04.252 { 00:16:04.252 "name": "BaseBdev2", 00:16:04.252 "uuid": "44d48571-7eec-4b14-9e3c-45594dcf074f", 00:16:04.252 "is_configured": true, 00:16:04.252 "data_offset": 0, 00:16:04.252 "data_size": 65536 00:16:04.252 }, 00:16:04.252 { 00:16:04.252 "name": "BaseBdev3", 00:16:04.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.252 "is_configured": false, 00:16:04.252 "data_offset": 0, 00:16:04.252 "data_size": 0 00:16:04.252 }, 00:16:04.252 { 00:16:04.252 "name": "BaseBdev4", 00:16:04.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.252 "is_configured": false, 00:16:04.252 "data_offset": 0, 00:16:04.252 "data_size": 0 00:16:04.252 } 00:16:04.252 ] 00:16:04.252 }' 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.252 17:08:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.818 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:04.818 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.818 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.818 [2024-11-20 17:08:28.523207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:04.818 BaseBdev3 00:16:04.818 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.818 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:04.818 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.819 [ 00:16:04.819 { 00:16:04.819 "name": "BaseBdev3", 00:16:04.819 "aliases": [ 00:16:04.819 "ca4f8210-f0fd-43f9-bd4f-07e9eb34a03c" 00:16:04.819 ], 00:16:04.819 "product_name": "Malloc disk", 00:16:04.819 "block_size": 512, 00:16:04.819 "num_blocks": 65536, 00:16:04.819 "uuid": "ca4f8210-f0fd-43f9-bd4f-07e9eb34a03c", 00:16:04.819 "assigned_rate_limits": { 00:16:04.819 "rw_ios_per_sec": 0, 00:16:04.819 "rw_mbytes_per_sec": 0, 00:16:04.819 "r_mbytes_per_sec": 0, 00:16:04.819 "w_mbytes_per_sec": 0 00:16:04.819 }, 00:16:04.819 "claimed": true, 00:16:04.819 "claim_type": "exclusive_write", 00:16:04.819 "zoned": false, 00:16:04.819 "supported_io_types": { 00:16:04.819 "read": true, 00:16:04.819 "write": true, 00:16:04.819 "unmap": true, 00:16:04.819 "flush": true, 00:16:04.819 "reset": true, 00:16:04.819 "nvme_admin": false, 
00:16:04.819 "nvme_io": false, 00:16:04.819 "nvme_io_md": false, 00:16:04.819 "write_zeroes": true, 00:16:04.819 "zcopy": true, 00:16:04.819 "get_zone_info": false, 00:16:04.819 "zone_management": false, 00:16:04.819 "zone_append": false, 00:16:04.819 "compare": false, 00:16:04.819 "compare_and_write": false, 00:16:04.819 "abort": true, 00:16:04.819 "seek_hole": false, 00:16:04.819 "seek_data": false, 00:16:04.819 "copy": true, 00:16:04.819 "nvme_iov_md": false 00:16:04.819 }, 00:16:04.819 "memory_domains": [ 00:16:04.819 { 00:16:04.819 "dma_device_id": "system", 00:16:04.819 "dma_device_type": 1 00:16:04.819 }, 00:16:04.819 { 00:16:04.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.819 "dma_device_type": 2 00:16:04.819 } 00:16:04.819 ], 00:16:04.819 "driver_specific": {} 00:16:04.819 } 00:16:04.819 ] 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.819 "name": "Existed_Raid", 00:16:04.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.819 "strip_size_kb": 64, 00:16:04.819 "state": "configuring", 00:16:04.819 "raid_level": "raid5f", 00:16:04.819 "superblock": false, 00:16:04.819 "num_base_bdevs": 4, 00:16:04.819 "num_base_bdevs_discovered": 3, 00:16:04.819 "num_base_bdevs_operational": 4, 00:16:04.819 "base_bdevs_list": [ 00:16:04.819 { 00:16:04.819 "name": "BaseBdev1", 00:16:04.819 "uuid": "a63ddc29-b9b9-432a-aab1-05e91df6372a", 00:16:04.819 "is_configured": true, 00:16:04.819 "data_offset": 0, 00:16:04.819 "data_size": 65536 00:16:04.819 }, 00:16:04.819 { 00:16:04.819 "name": "BaseBdev2", 00:16:04.819 "uuid": "44d48571-7eec-4b14-9e3c-45594dcf074f", 00:16:04.819 "is_configured": true, 00:16:04.819 "data_offset": 0, 00:16:04.819 "data_size": 65536 00:16:04.819 }, 00:16:04.819 { 
00:16:04.819 "name": "BaseBdev3", 00:16:04.819 "uuid": "ca4f8210-f0fd-43f9-bd4f-07e9eb34a03c", 00:16:04.819 "is_configured": true, 00:16:04.819 "data_offset": 0, 00:16:04.819 "data_size": 65536 00:16:04.819 }, 00:16:04.819 { 00:16:04.819 "name": "BaseBdev4", 00:16:04.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.819 "is_configured": false, 00:16:04.819 "data_offset": 0, 00:16:04.819 "data_size": 0 00:16:04.819 } 00:16:04.819 ] 00:16:04.819 }' 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.819 17:08:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.387 [2024-11-20 17:08:29.126821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:05.387 [2024-11-20 17:08:29.126891] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:05.387 [2024-11-20 17:08:29.126905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:05.387 [2024-11-20 17:08:29.127299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:05.387 [2024-11-20 17:08:29.134034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:05.387 [2024-11-20 17:08:29.134062] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:05.387 [2024-11-20 17:08:29.134388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.387 BaseBdev4 00:16:05.387 17:08:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.387 [ 00:16:05.387 { 00:16:05.387 "name": "BaseBdev4", 00:16:05.387 "aliases": [ 00:16:05.387 "01d662fd-c098-468d-8e58-e08cf013906c" 00:16:05.387 ], 00:16:05.387 "product_name": "Malloc disk", 00:16:05.387 "block_size": 512, 00:16:05.387 "num_blocks": 65536, 00:16:05.387 "uuid": "01d662fd-c098-468d-8e58-e08cf013906c", 00:16:05.387 "assigned_rate_limits": { 00:16:05.387 "rw_ios_per_sec": 0, 00:16:05.387 
"rw_mbytes_per_sec": 0, 00:16:05.387 "r_mbytes_per_sec": 0, 00:16:05.387 "w_mbytes_per_sec": 0 00:16:05.387 }, 00:16:05.387 "claimed": true, 00:16:05.387 "claim_type": "exclusive_write", 00:16:05.387 "zoned": false, 00:16:05.387 "supported_io_types": { 00:16:05.387 "read": true, 00:16:05.387 "write": true, 00:16:05.387 "unmap": true, 00:16:05.387 "flush": true, 00:16:05.387 "reset": true, 00:16:05.387 "nvme_admin": false, 00:16:05.387 "nvme_io": false, 00:16:05.387 "nvme_io_md": false, 00:16:05.387 "write_zeroes": true, 00:16:05.387 "zcopy": true, 00:16:05.387 "get_zone_info": false, 00:16:05.387 "zone_management": false, 00:16:05.387 "zone_append": false, 00:16:05.387 "compare": false, 00:16:05.387 "compare_and_write": false, 00:16:05.387 "abort": true, 00:16:05.387 "seek_hole": false, 00:16:05.387 "seek_data": false, 00:16:05.387 "copy": true, 00:16:05.387 "nvme_iov_md": false 00:16:05.387 }, 00:16:05.387 "memory_domains": [ 00:16:05.387 { 00:16:05.387 "dma_device_id": "system", 00:16:05.387 "dma_device_type": 1 00:16:05.387 }, 00:16:05.387 { 00:16:05.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.387 "dma_device_type": 2 00:16:05.387 } 00:16:05.387 ], 00:16:05.387 "driver_specific": {} 00:16:05.387 } 00:16:05.387 ] 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.387 17:08:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.387 "name": "Existed_Raid", 00:16:05.387 "uuid": "dc4a27fe-3784-4ad0-b882-a21093f4af1a", 00:16:05.387 "strip_size_kb": 64, 00:16:05.387 "state": "online", 00:16:05.387 "raid_level": "raid5f", 00:16:05.387 "superblock": false, 00:16:05.387 "num_base_bdevs": 4, 00:16:05.387 "num_base_bdevs_discovered": 4, 00:16:05.387 "num_base_bdevs_operational": 4, 00:16:05.387 "base_bdevs_list": [ 00:16:05.387 { 00:16:05.387 "name": 
"BaseBdev1", 00:16:05.387 "uuid": "a63ddc29-b9b9-432a-aab1-05e91df6372a", 00:16:05.387 "is_configured": true, 00:16:05.387 "data_offset": 0, 00:16:05.387 "data_size": 65536 00:16:05.387 }, 00:16:05.387 { 00:16:05.387 "name": "BaseBdev2", 00:16:05.387 "uuid": "44d48571-7eec-4b14-9e3c-45594dcf074f", 00:16:05.387 "is_configured": true, 00:16:05.387 "data_offset": 0, 00:16:05.387 "data_size": 65536 00:16:05.387 }, 00:16:05.387 { 00:16:05.387 "name": "BaseBdev3", 00:16:05.387 "uuid": "ca4f8210-f0fd-43f9-bd4f-07e9eb34a03c", 00:16:05.387 "is_configured": true, 00:16:05.387 "data_offset": 0, 00:16:05.387 "data_size": 65536 00:16:05.387 }, 00:16:05.387 { 00:16:05.387 "name": "BaseBdev4", 00:16:05.387 "uuid": "01d662fd-c098-468d-8e58-e08cf013906c", 00:16:05.387 "is_configured": true, 00:16:05.387 "data_offset": 0, 00:16:05.387 "data_size": 65536 00:16:05.387 } 00:16:05.387 ] 00:16:05.387 }' 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.387 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.954 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:05.954 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:05.954 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:05.954 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:05.954 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:05.954 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:05.954 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:05.954 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:16:05.954 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.954 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.954 [2024-11-20 17:08:29.702466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.954 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.954 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:05.954 "name": "Existed_Raid", 00:16:05.954 "aliases": [ 00:16:05.954 "dc4a27fe-3784-4ad0-b882-a21093f4af1a" 00:16:05.954 ], 00:16:05.954 "product_name": "Raid Volume", 00:16:05.954 "block_size": 512, 00:16:05.954 "num_blocks": 196608, 00:16:05.954 "uuid": "dc4a27fe-3784-4ad0-b882-a21093f4af1a", 00:16:05.954 "assigned_rate_limits": { 00:16:05.954 "rw_ios_per_sec": 0, 00:16:05.954 "rw_mbytes_per_sec": 0, 00:16:05.954 "r_mbytes_per_sec": 0, 00:16:05.954 "w_mbytes_per_sec": 0 00:16:05.954 }, 00:16:05.954 "claimed": false, 00:16:05.954 "zoned": false, 00:16:05.954 "supported_io_types": { 00:16:05.954 "read": true, 00:16:05.954 "write": true, 00:16:05.954 "unmap": false, 00:16:05.954 "flush": false, 00:16:05.954 "reset": true, 00:16:05.954 "nvme_admin": false, 00:16:05.954 "nvme_io": false, 00:16:05.954 "nvme_io_md": false, 00:16:05.954 "write_zeroes": true, 00:16:05.954 "zcopy": false, 00:16:05.954 "get_zone_info": false, 00:16:05.954 "zone_management": false, 00:16:05.954 "zone_append": false, 00:16:05.954 "compare": false, 00:16:05.954 "compare_and_write": false, 00:16:05.954 "abort": false, 00:16:05.954 "seek_hole": false, 00:16:05.954 "seek_data": false, 00:16:05.954 "copy": false, 00:16:05.954 "nvme_iov_md": false 00:16:05.954 }, 00:16:05.954 "driver_specific": { 00:16:05.954 "raid": { 00:16:05.954 "uuid": "dc4a27fe-3784-4ad0-b882-a21093f4af1a", 00:16:05.954 "strip_size_kb": 64, 
00:16:05.954 "state": "online", 00:16:05.954 "raid_level": "raid5f", 00:16:05.954 "superblock": false, 00:16:05.954 "num_base_bdevs": 4, 00:16:05.954 "num_base_bdevs_discovered": 4, 00:16:05.954 "num_base_bdevs_operational": 4, 00:16:05.954 "base_bdevs_list": [ 00:16:05.954 { 00:16:05.954 "name": "BaseBdev1", 00:16:05.954 "uuid": "a63ddc29-b9b9-432a-aab1-05e91df6372a", 00:16:05.954 "is_configured": true, 00:16:05.954 "data_offset": 0, 00:16:05.954 "data_size": 65536 00:16:05.954 }, 00:16:05.954 { 00:16:05.954 "name": "BaseBdev2", 00:16:05.954 "uuid": "44d48571-7eec-4b14-9e3c-45594dcf074f", 00:16:05.954 "is_configured": true, 00:16:05.954 "data_offset": 0, 00:16:05.954 "data_size": 65536 00:16:05.954 }, 00:16:05.954 { 00:16:05.954 "name": "BaseBdev3", 00:16:05.954 "uuid": "ca4f8210-f0fd-43f9-bd4f-07e9eb34a03c", 00:16:05.954 "is_configured": true, 00:16:05.954 "data_offset": 0, 00:16:05.954 "data_size": 65536 00:16:05.954 }, 00:16:05.954 { 00:16:05.954 "name": "BaseBdev4", 00:16:05.954 "uuid": "01d662fd-c098-468d-8e58-e08cf013906c", 00:16:05.954 "is_configured": true, 00:16:05.954 "data_offset": 0, 00:16:05.954 "data_size": 65536 00:16:05.954 } 00:16:05.954 ] 00:16:05.954 } 00:16:05.954 } 00:16:05.954 }' 00:16:05.954 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:05.954 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:05.954 BaseBdev2 00:16:05.954 BaseBdev3 00:16:05.954 BaseBdev4' 00:16:05.954 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.213 17:08:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.213 17:08:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.213 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.213 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.213 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.213 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.213 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:06.213 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.213 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.213 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.213 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.213 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.213 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:06.213 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.213 17:08:30 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:06.472 [2024-11-20 17:08:30.078428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.472 17:08:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.472 "name": "Existed_Raid", 00:16:06.472 "uuid": "dc4a27fe-3784-4ad0-b882-a21093f4af1a", 00:16:06.472 "strip_size_kb": 64, 00:16:06.472 "state": "online", 00:16:06.472 "raid_level": "raid5f", 00:16:06.472 "superblock": false, 00:16:06.472 "num_base_bdevs": 4, 00:16:06.472 "num_base_bdevs_discovered": 3, 00:16:06.472 "num_base_bdevs_operational": 3, 00:16:06.472 "base_bdevs_list": [ 00:16:06.472 { 00:16:06.472 "name": null, 00:16:06.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.472 "is_configured": false, 00:16:06.472 "data_offset": 0, 00:16:06.472 "data_size": 65536 00:16:06.472 }, 00:16:06.472 { 00:16:06.472 "name": "BaseBdev2", 00:16:06.472 "uuid": "44d48571-7eec-4b14-9e3c-45594dcf074f", 00:16:06.472 "is_configured": true, 00:16:06.472 "data_offset": 0, 00:16:06.472 "data_size": 65536 00:16:06.472 }, 00:16:06.472 { 00:16:06.472 "name": "BaseBdev3", 00:16:06.472 "uuid": "ca4f8210-f0fd-43f9-bd4f-07e9eb34a03c", 00:16:06.472 "is_configured": true, 00:16:06.472 "data_offset": 0, 00:16:06.472 "data_size": 65536 00:16:06.472 }, 00:16:06.472 { 00:16:06.472 "name": "BaseBdev4", 00:16:06.472 "uuid": "01d662fd-c098-468d-8e58-e08cf013906c", 00:16:06.472 "is_configured": true, 00:16:06.472 "data_offset": 0, 00:16:06.472 "data_size": 65536 00:16:06.472 } 00:16:06.472 ] 00:16:06.472 }' 00:16:06.472 
17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.472 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.040 [2024-11-20 17:08:30.733990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:07.040 [2024-11-20 17:08:30.734189] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:07.040 [2024-11-20 17:08:30.815702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.040 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.040 [2024-11-20 17:08:30.879717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:07.299 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.299 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:07.299 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:07.299 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.299 17:08:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.299 17:08:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:07.299 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.299 17:08:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.299 [2024-11-20 17:08:31.016853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:07.299 [2024-11-20 17:08:31.017088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.299 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.559 BaseBdev2 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.559 [ 00:16:07.559 { 00:16:07.559 "name": "BaseBdev2", 00:16:07.559 "aliases": [ 00:16:07.559 "cb0325ab-f39a-482d-8e32-9cf58fa75d2c" 00:16:07.559 ], 00:16:07.559 "product_name": "Malloc disk", 00:16:07.559 "block_size": 512, 00:16:07.559 "num_blocks": 65536, 00:16:07.559 "uuid": "cb0325ab-f39a-482d-8e32-9cf58fa75d2c", 00:16:07.559 "assigned_rate_limits": { 00:16:07.559 "rw_ios_per_sec": 0, 00:16:07.559 "rw_mbytes_per_sec": 0, 00:16:07.559 "r_mbytes_per_sec": 0, 00:16:07.559 "w_mbytes_per_sec": 0 00:16:07.559 }, 00:16:07.559 "claimed": false, 00:16:07.559 "zoned": false, 00:16:07.559 "supported_io_types": { 00:16:07.559 "read": true, 00:16:07.559 "write": true, 00:16:07.559 "unmap": true, 00:16:07.559 "flush": true, 00:16:07.559 "reset": true, 00:16:07.559 "nvme_admin": false, 00:16:07.559 "nvme_io": false, 00:16:07.559 "nvme_io_md": false, 00:16:07.559 "write_zeroes": true, 00:16:07.559 "zcopy": true, 00:16:07.559 "get_zone_info": false, 00:16:07.559 "zone_management": false, 00:16:07.559 "zone_append": false, 00:16:07.559 "compare": false, 00:16:07.559 "compare_and_write": false, 00:16:07.559 "abort": true, 00:16:07.559 "seek_hole": false, 00:16:07.559 "seek_data": false, 00:16:07.559 "copy": true, 00:16:07.559 "nvme_iov_md": false 00:16:07.559 }, 00:16:07.559 "memory_domains": [ 00:16:07.559 { 00:16:07.559 "dma_device_id": "system", 00:16:07.559 
"dma_device_type": 1 00:16:07.559 }, 00:16:07.559 { 00:16:07.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.559 "dma_device_type": 2 00:16:07.559 } 00:16:07.559 ], 00:16:07.559 "driver_specific": {} 00:16:07.559 } 00:16:07.559 ] 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.559 BaseBdev3 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:07.559 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:07.560 17:08:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.560 [ 00:16:07.560 { 00:16:07.560 "name": "BaseBdev3", 00:16:07.560 "aliases": [ 00:16:07.560 "68a84c32-4a2a-45e7-8efd-198e556319ce" 00:16:07.560 ], 00:16:07.560 "product_name": "Malloc disk", 00:16:07.560 "block_size": 512, 00:16:07.560 "num_blocks": 65536, 00:16:07.560 "uuid": "68a84c32-4a2a-45e7-8efd-198e556319ce", 00:16:07.560 "assigned_rate_limits": { 00:16:07.560 "rw_ios_per_sec": 0, 00:16:07.560 "rw_mbytes_per_sec": 0, 00:16:07.560 "r_mbytes_per_sec": 0, 00:16:07.560 "w_mbytes_per_sec": 0 00:16:07.560 }, 00:16:07.560 "claimed": false, 00:16:07.560 "zoned": false, 00:16:07.560 "supported_io_types": { 00:16:07.560 "read": true, 00:16:07.560 "write": true, 00:16:07.560 "unmap": true, 00:16:07.560 "flush": true, 00:16:07.560 "reset": true, 00:16:07.560 "nvme_admin": false, 00:16:07.560 "nvme_io": false, 00:16:07.560 "nvme_io_md": false, 00:16:07.560 "write_zeroes": true, 00:16:07.560 "zcopy": true, 00:16:07.560 "get_zone_info": false, 00:16:07.560 "zone_management": false, 00:16:07.560 "zone_append": false, 00:16:07.560 "compare": false, 00:16:07.560 "compare_and_write": false, 00:16:07.560 "abort": true, 00:16:07.560 "seek_hole": false, 00:16:07.560 "seek_data": false, 00:16:07.560 "copy": true, 00:16:07.560 "nvme_iov_md": false 00:16:07.560 }, 00:16:07.560 "memory_domains": [ 00:16:07.560 { 00:16:07.560 
"dma_device_id": "system", 00:16:07.560 "dma_device_type": 1 00:16:07.560 }, 00:16:07.560 { 00:16:07.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.560 "dma_device_type": 2 00:16:07.560 } 00:16:07.560 ], 00:16:07.560 "driver_specific": {} 00:16:07.560 } 00:16:07.560 ] 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.560 BaseBdev4 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.560 [ 00:16:07.560 { 00:16:07.560 "name": "BaseBdev4", 00:16:07.560 "aliases": [ 00:16:07.560 "cd610831-84c2-43f0-87f9-a3de93a81f52" 00:16:07.560 ], 00:16:07.560 "product_name": "Malloc disk", 00:16:07.560 "block_size": 512, 00:16:07.560 "num_blocks": 65536, 00:16:07.560 "uuid": "cd610831-84c2-43f0-87f9-a3de93a81f52", 00:16:07.560 "assigned_rate_limits": { 00:16:07.560 "rw_ios_per_sec": 0, 00:16:07.560 "rw_mbytes_per_sec": 0, 00:16:07.560 "r_mbytes_per_sec": 0, 00:16:07.560 "w_mbytes_per_sec": 0 00:16:07.560 }, 00:16:07.560 "claimed": false, 00:16:07.560 "zoned": false, 00:16:07.560 "supported_io_types": { 00:16:07.560 "read": true, 00:16:07.560 "write": true, 00:16:07.560 "unmap": true, 00:16:07.560 "flush": true, 00:16:07.560 "reset": true, 00:16:07.560 "nvme_admin": false, 00:16:07.560 "nvme_io": false, 00:16:07.560 "nvme_io_md": false, 00:16:07.560 "write_zeroes": true, 00:16:07.560 "zcopy": true, 00:16:07.560 "get_zone_info": false, 00:16:07.560 "zone_management": false, 00:16:07.560 "zone_append": false, 00:16:07.560 "compare": false, 00:16:07.560 "compare_and_write": false, 00:16:07.560 "abort": true, 00:16:07.560 "seek_hole": false, 00:16:07.560 "seek_data": false, 00:16:07.560 "copy": true, 00:16:07.560 "nvme_iov_md": false 00:16:07.560 }, 00:16:07.560 "memory_domains": [ 
00:16:07.560 { 00:16:07.560 "dma_device_id": "system", 00:16:07.560 "dma_device_type": 1 00:16:07.560 }, 00:16:07.560 { 00:16:07.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.560 "dma_device_type": 2 00:16:07.560 } 00:16:07.560 ], 00:16:07.560 "driver_specific": {} 00:16:07.560 } 00:16:07.560 ] 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.560 [2024-11-20 17:08:31.373276] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:07.560 [2024-11-20 17:08:31.373479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:07.560 [2024-11-20 17:08:31.373522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:07.560 [2024-11-20 17:08:31.376247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.560 [2024-11-20 17:08:31.376471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.560 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.820 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.820 "name": "Existed_Raid", 00:16:07.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.820 "strip_size_kb": 64, 00:16:07.820 "state": "configuring", 00:16:07.820 "raid_level": "raid5f", 00:16:07.820 
"superblock": false, 00:16:07.820 "num_base_bdevs": 4, 00:16:07.820 "num_base_bdevs_discovered": 3, 00:16:07.820 "num_base_bdevs_operational": 4, 00:16:07.820 "base_bdevs_list": [ 00:16:07.820 { 00:16:07.820 "name": "BaseBdev1", 00:16:07.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.820 "is_configured": false, 00:16:07.820 "data_offset": 0, 00:16:07.820 "data_size": 0 00:16:07.820 }, 00:16:07.820 { 00:16:07.820 "name": "BaseBdev2", 00:16:07.820 "uuid": "cb0325ab-f39a-482d-8e32-9cf58fa75d2c", 00:16:07.820 "is_configured": true, 00:16:07.820 "data_offset": 0, 00:16:07.820 "data_size": 65536 00:16:07.820 }, 00:16:07.820 { 00:16:07.820 "name": "BaseBdev3", 00:16:07.820 "uuid": "68a84c32-4a2a-45e7-8efd-198e556319ce", 00:16:07.820 "is_configured": true, 00:16:07.820 "data_offset": 0, 00:16:07.820 "data_size": 65536 00:16:07.820 }, 00:16:07.820 { 00:16:07.820 "name": "BaseBdev4", 00:16:07.820 "uuid": "cd610831-84c2-43f0-87f9-a3de93a81f52", 00:16:07.820 "is_configured": true, 00:16:07.820 "data_offset": 0, 00:16:07.820 "data_size": 65536 00:16:07.820 } 00:16:07.820 ] 00:16:07.820 }' 00:16:07.820 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.820 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.078 [2024-11-20 17:08:31.909551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.078 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.336 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.336 "name": "Existed_Raid", 00:16:08.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.336 "strip_size_kb": 64, 00:16:08.336 "state": "configuring", 00:16:08.336 "raid_level": "raid5f", 00:16:08.336 "superblock": false, 
00:16:08.336 "num_base_bdevs": 4, 00:16:08.336 "num_base_bdevs_discovered": 2, 00:16:08.336 "num_base_bdevs_operational": 4, 00:16:08.336 "base_bdevs_list": [ 00:16:08.336 { 00:16:08.336 "name": "BaseBdev1", 00:16:08.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.336 "is_configured": false, 00:16:08.336 "data_offset": 0, 00:16:08.336 "data_size": 0 00:16:08.336 }, 00:16:08.336 { 00:16:08.336 "name": null, 00:16:08.336 "uuid": "cb0325ab-f39a-482d-8e32-9cf58fa75d2c", 00:16:08.336 "is_configured": false, 00:16:08.336 "data_offset": 0, 00:16:08.336 "data_size": 65536 00:16:08.336 }, 00:16:08.336 { 00:16:08.336 "name": "BaseBdev3", 00:16:08.336 "uuid": "68a84c32-4a2a-45e7-8efd-198e556319ce", 00:16:08.336 "is_configured": true, 00:16:08.336 "data_offset": 0, 00:16:08.336 "data_size": 65536 00:16:08.336 }, 00:16:08.336 { 00:16:08.336 "name": "BaseBdev4", 00:16:08.336 "uuid": "cd610831-84c2-43f0-87f9-a3de93a81f52", 00:16:08.336 "is_configured": true, 00:16:08.336 "data_offset": 0, 00:16:08.336 "data_size": 65536 00:16:08.336 } 00:16:08.336 ] 00:16:08.336 }' 00:16:08.336 17:08:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.336 17:08:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.595 17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.595 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.595 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.595 17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:08.595 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.876 17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:08.876 
17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:08.876 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.876 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.876 [2024-11-20 17:08:32.530747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:08.876 BaseBdev1 00:16:08.876 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.876 17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:08.876 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:08.876 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:08.876 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:08.876 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:08.876 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:08.876 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:08.876 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.876 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.876 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.876 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:08.876 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.876 
17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.876 [ 00:16:08.876 { 00:16:08.876 "name": "BaseBdev1", 00:16:08.876 "aliases": [ 00:16:08.876 "d0b0bd7e-23b2-4d3d-8785-f7ab42dccecc" 00:16:08.876 ], 00:16:08.876 "product_name": "Malloc disk", 00:16:08.876 "block_size": 512, 00:16:08.876 "num_blocks": 65536, 00:16:08.876 "uuid": "d0b0bd7e-23b2-4d3d-8785-f7ab42dccecc", 00:16:08.876 "assigned_rate_limits": { 00:16:08.876 "rw_ios_per_sec": 0, 00:16:08.876 "rw_mbytes_per_sec": 0, 00:16:08.876 "r_mbytes_per_sec": 0, 00:16:08.876 "w_mbytes_per_sec": 0 00:16:08.876 }, 00:16:08.876 "claimed": true, 00:16:08.876 "claim_type": "exclusive_write", 00:16:08.876 "zoned": false, 00:16:08.876 "supported_io_types": { 00:16:08.876 "read": true, 00:16:08.876 "write": true, 00:16:08.876 "unmap": true, 00:16:08.876 "flush": true, 00:16:08.876 "reset": true, 00:16:08.876 "nvme_admin": false, 00:16:08.876 "nvme_io": false, 00:16:08.876 "nvme_io_md": false, 00:16:08.876 "write_zeroes": true, 00:16:08.876 "zcopy": true, 00:16:08.876 "get_zone_info": false, 00:16:08.876 "zone_management": false, 00:16:08.876 "zone_append": false, 00:16:08.876 "compare": false, 00:16:08.876 "compare_and_write": false, 00:16:08.876 "abort": true, 00:16:08.876 "seek_hole": false, 00:16:08.876 "seek_data": false, 00:16:08.876 "copy": true, 00:16:08.876 "nvme_iov_md": false 00:16:08.876 }, 00:16:08.876 "memory_domains": [ 00:16:08.876 { 00:16:08.876 "dma_device_id": "system", 00:16:08.876 "dma_device_type": 1 00:16:08.876 }, 00:16:08.876 { 00:16:08.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.876 "dma_device_type": 2 00:16:08.876 } 00:16:08.876 ], 00:16:08.876 "driver_specific": {} 00:16:08.877 } 00:16:08.877 ] 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:08.877 17:08:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.877 "name": "Existed_Raid", 00:16:08.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.877 "strip_size_kb": 64, 00:16:08.877 "state": 
"configuring", 00:16:08.877 "raid_level": "raid5f", 00:16:08.877 "superblock": false, 00:16:08.877 "num_base_bdevs": 4, 00:16:08.877 "num_base_bdevs_discovered": 3, 00:16:08.877 "num_base_bdevs_operational": 4, 00:16:08.877 "base_bdevs_list": [ 00:16:08.877 { 00:16:08.877 "name": "BaseBdev1", 00:16:08.877 "uuid": "d0b0bd7e-23b2-4d3d-8785-f7ab42dccecc", 00:16:08.877 "is_configured": true, 00:16:08.877 "data_offset": 0, 00:16:08.877 "data_size": 65536 00:16:08.877 }, 00:16:08.877 { 00:16:08.877 "name": null, 00:16:08.877 "uuid": "cb0325ab-f39a-482d-8e32-9cf58fa75d2c", 00:16:08.877 "is_configured": false, 00:16:08.877 "data_offset": 0, 00:16:08.877 "data_size": 65536 00:16:08.877 }, 00:16:08.877 { 00:16:08.877 "name": "BaseBdev3", 00:16:08.877 "uuid": "68a84c32-4a2a-45e7-8efd-198e556319ce", 00:16:08.877 "is_configured": true, 00:16:08.877 "data_offset": 0, 00:16:08.877 "data_size": 65536 00:16:08.877 }, 00:16:08.877 { 00:16:08.877 "name": "BaseBdev4", 00:16:08.877 "uuid": "cd610831-84c2-43f0-87f9-a3de93a81f52", 00:16:08.877 "is_configured": true, 00:16:08.877 "data_offset": 0, 00:16:08.877 "data_size": 65536 00:16:08.877 } 00:16:08.877 ] 00:16:08.877 }' 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.877 17:08:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.444 17:08:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.444 [2024-11-20 17:08:33.147057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.444 17:08:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.444 "name": "Existed_Raid", 00:16:09.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.444 "strip_size_kb": 64, 00:16:09.444 "state": "configuring", 00:16:09.444 "raid_level": "raid5f", 00:16:09.444 "superblock": false, 00:16:09.444 "num_base_bdevs": 4, 00:16:09.444 "num_base_bdevs_discovered": 2, 00:16:09.444 "num_base_bdevs_operational": 4, 00:16:09.444 "base_bdevs_list": [ 00:16:09.444 { 00:16:09.444 "name": "BaseBdev1", 00:16:09.444 "uuid": "d0b0bd7e-23b2-4d3d-8785-f7ab42dccecc", 00:16:09.444 "is_configured": true, 00:16:09.444 "data_offset": 0, 00:16:09.444 "data_size": 65536 00:16:09.444 }, 00:16:09.444 { 00:16:09.444 "name": null, 00:16:09.444 "uuid": "cb0325ab-f39a-482d-8e32-9cf58fa75d2c", 00:16:09.444 "is_configured": false, 00:16:09.444 "data_offset": 0, 00:16:09.444 "data_size": 65536 00:16:09.444 }, 00:16:09.444 { 00:16:09.444 "name": null, 00:16:09.444 "uuid": "68a84c32-4a2a-45e7-8efd-198e556319ce", 00:16:09.444 "is_configured": false, 00:16:09.444 "data_offset": 0, 00:16:09.444 "data_size": 65536 00:16:09.444 }, 00:16:09.444 { 00:16:09.444 "name": "BaseBdev4", 00:16:09.444 "uuid": "cd610831-84c2-43f0-87f9-a3de93a81f52", 00:16:09.444 "is_configured": true, 00:16:09.444 "data_offset": 0, 00:16:09.444 "data_size": 65536 00:16:09.444 } 00:16:09.444 ] 00:16:09.444 }' 00:16:09.444 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.444 17:08:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.012 [2024-11-20 17:08:33.767335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.012 
17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.012 "name": "Existed_Raid", 00:16:10.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.012 "strip_size_kb": 64, 00:16:10.012 "state": "configuring", 00:16:10.012 "raid_level": "raid5f", 00:16:10.012 "superblock": false, 00:16:10.012 "num_base_bdevs": 4, 00:16:10.012 "num_base_bdevs_discovered": 3, 00:16:10.012 "num_base_bdevs_operational": 4, 00:16:10.012 "base_bdevs_list": [ 00:16:10.012 { 00:16:10.012 "name": "BaseBdev1", 00:16:10.012 "uuid": "d0b0bd7e-23b2-4d3d-8785-f7ab42dccecc", 00:16:10.012 "is_configured": true, 00:16:10.012 "data_offset": 0, 00:16:10.012 "data_size": 65536 00:16:10.012 }, 00:16:10.012 { 00:16:10.012 "name": null, 00:16:10.012 "uuid": "cb0325ab-f39a-482d-8e32-9cf58fa75d2c", 00:16:10.012 "is_configured": 
false, 00:16:10.012 "data_offset": 0, 00:16:10.012 "data_size": 65536 00:16:10.012 }, 00:16:10.012 { 00:16:10.012 "name": "BaseBdev3", 00:16:10.012 "uuid": "68a84c32-4a2a-45e7-8efd-198e556319ce", 00:16:10.012 "is_configured": true, 00:16:10.012 "data_offset": 0, 00:16:10.012 "data_size": 65536 00:16:10.012 }, 00:16:10.012 { 00:16:10.012 "name": "BaseBdev4", 00:16:10.012 "uuid": "cd610831-84c2-43f0-87f9-a3de93a81f52", 00:16:10.012 "is_configured": true, 00:16:10.012 "data_offset": 0, 00:16:10.012 "data_size": 65536 00:16:10.012 } 00:16:10.012 ] 00:16:10.012 }' 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.012 17:08:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.580 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.580 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:10.580 17:08:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.580 17:08:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.580 17:08:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.580 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:10.580 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:10.580 17:08:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.580 17:08:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.580 [2024-11-20 17:08:34.395677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:10.838 17:08:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.838 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:10.838 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.838 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.838 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.838 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.838 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.838 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.838 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.838 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.839 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.839 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.839 17:08:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.839 17:08:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.839 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.839 17:08:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.839 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.839 "name": "Existed_Raid", 00:16:10.839 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:10.839 "strip_size_kb": 64, 00:16:10.839 "state": "configuring", 00:16:10.839 "raid_level": "raid5f", 00:16:10.839 "superblock": false, 00:16:10.839 "num_base_bdevs": 4, 00:16:10.839 "num_base_bdevs_discovered": 2, 00:16:10.839 "num_base_bdevs_operational": 4, 00:16:10.839 "base_bdevs_list": [ 00:16:10.839 { 00:16:10.839 "name": null, 00:16:10.839 "uuid": "d0b0bd7e-23b2-4d3d-8785-f7ab42dccecc", 00:16:10.839 "is_configured": false, 00:16:10.839 "data_offset": 0, 00:16:10.839 "data_size": 65536 00:16:10.839 }, 00:16:10.839 { 00:16:10.839 "name": null, 00:16:10.839 "uuid": "cb0325ab-f39a-482d-8e32-9cf58fa75d2c", 00:16:10.839 "is_configured": false, 00:16:10.839 "data_offset": 0, 00:16:10.839 "data_size": 65536 00:16:10.839 }, 00:16:10.839 { 00:16:10.839 "name": "BaseBdev3", 00:16:10.839 "uuid": "68a84c32-4a2a-45e7-8efd-198e556319ce", 00:16:10.839 "is_configured": true, 00:16:10.839 "data_offset": 0, 00:16:10.839 "data_size": 65536 00:16:10.839 }, 00:16:10.839 { 00:16:10.839 "name": "BaseBdev4", 00:16:10.839 "uuid": "cd610831-84c2-43f0-87f9-a3de93a81f52", 00:16:10.839 "is_configured": true, 00:16:10.839 "data_offset": 0, 00:16:10.839 "data_size": 65536 00:16:10.839 } 00:16:10.839 ] 00:16:10.839 }' 00:16:10.839 17:08:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.839 17:08:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.407 [2024-11-20 17:08:35.106892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.407 "name": "Existed_Raid", 00:16:11.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.407 "strip_size_kb": 64, 00:16:11.407 "state": "configuring", 00:16:11.407 "raid_level": "raid5f", 00:16:11.407 "superblock": false, 00:16:11.407 "num_base_bdevs": 4, 00:16:11.407 "num_base_bdevs_discovered": 3, 00:16:11.407 "num_base_bdevs_operational": 4, 00:16:11.407 "base_bdevs_list": [ 00:16:11.407 { 00:16:11.407 "name": null, 00:16:11.407 "uuid": "d0b0bd7e-23b2-4d3d-8785-f7ab42dccecc", 00:16:11.407 "is_configured": false, 00:16:11.407 "data_offset": 0, 00:16:11.407 "data_size": 65536 00:16:11.407 }, 00:16:11.407 { 00:16:11.407 "name": "BaseBdev2", 00:16:11.407 "uuid": "cb0325ab-f39a-482d-8e32-9cf58fa75d2c", 00:16:11.407 "is_configured": true, 00:16:11.407 "data_offset": 0, 00:16:11.407 "data_size": 65536 00:16:11.407 }, 00:16:11.407 { 00:16:11.407 "name": "BaseBdev3", 00:16:11.407 "uuid": "68a84c32-4a2a-45e7-8efd-198e556319ce", 00:16:11.407 "is_configured": true, 00:16:11.407 "data_offset": 0, 00:16:11.407 "data_size": 65536 00:16:11.407 }, 00:16:11.407 { 00:16:11.407 "name": "BaseBdev4", 00:16:11.407 "uuid": "cd610831-84c2-43f0-87f9-a3de93a81f52", 00:16:11.407 "is_configured": true, 00:16:11.407 "data_offset": 0, 00:16:11.407 "data_size": 65536 00:16:11.407 } 00:16:11.407 ] 00:16:11.407 }' 00:16:11.407 17:08:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.407 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.975 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:11.975 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.975 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.975 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.975 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.975 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:11.975 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.975 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.975 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.975 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:11.975 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.975 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d0b0bd7e-23b2-4d3d-8785-f7ab42dccecc 00:16:11.975 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.975 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.234 [2024-11-20 17:08:35.853947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:12.234 [2024-11-20 
17:08:35.854014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:12.234 [2024-11-20 17:08:35.854025] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:12.234 [2024-11-20 17:08:35.854379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:12.234 [2024-11-20 17:08:35.860787] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:12.234 [2024-11-20 17:08:35.861010] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:12.234 [2024-11-20 17:08:35.861385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.234 NewBaseBdev 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.234 [ 00:16:12.234 { 00:16:12.234 "name": "NewBaseBdev", 00:16:12.234 "aliases": [ 00:16:12.234 "d0b0bd7e-23b2-4d3d-8785-f7ab42dccecc" 00:16:12.234 ], 00:16:12.234 "product_name": "Malloc disk", 00:16:12.234 "block_size": 512, 00:16:12.234 "num_blocks": 65536, 00:16:12.234 "uuid": "d0b0bd7e-23b2-4d3d-8785-f7ab42dccecc", 00:16:12.234 "assigned_rate_limits": { 00:16:12.234 "rw_ios_per_sec": 0, 00:16:12.234 "rw_mbytes_per_sec": 0, 00:16:12.234 "r_mbytes_per_sec": 0, 00:16:12.234 "w_mbytes_per_sec": 0 00:16:12.234 }, 00:16:12.234 "claimed": true, 00:16:12.234 "claim_type": "exclusive_write", 00:16:12.234 "zoned": false, 00:16:12.234 "supported_io_types": { 00:16:12.234 "read": true, 00:16:12.234 "write": true, 00:16:12.234 "unmap": true, 00:16:12.234 "flush": true, 00:16:12.234 "reset": true, 00:16:12.234 "nvme_admin": false, 00:16:12.234 "nvme_io": false, 00:16:12.234 "nvme_io_md": false, 00:16:12.234 "write_zeroes": true, 00:16:12.234 "zcopy": true, 00:16:12.234 "get_zone_info": false, 00:16:12.234 "zone_management": false, 00:16:12.234 "zone_append": false, 00:16:12.234 "compare": false, 00:16:12.234 "compare_and_write": false, 00:16:12.234 "abort": true, 00:16:12.234 "seek_hole": false, 00:16:12.234 "seek_data": false, 00:16:12.234 "copy": true, 00:16:12.234 "nvme_iov_md": false 00:16:12.234 }, 00:16:12.234 "memory_domains": [ 00:16:12.234 { 00:16:12.234 "dma_device_id": "system", 00:16:12.234 "dma_device_type": 1 00:16:12.234 }, 00:16:12.234 { 00:16:12.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.234 "dma_device_type": 2 00:16:12.234 } 
00:16:12.234 ], 00:16:12.234 "driver_specific": {} 00:16:12.234 } 00:16:12.234 ] 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.234 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.234 "name": "Existed_Raid", 00:16:12.234 "uuid": "102c3f95-a758-4b40-94a6-c2d1b054e137", 00:16:12.234 "strip_size_kb": 64, 00:16:12.234 "state": "online", 00:16:12.234 "raid_level": "raid5f", 00:16:12.234 "superblock": false, 00:16:12.234 "num_base_bdevs": 4, 00:16:12.234 "num_base_bdevs_discovered": 4, 00:16:12.235 "num_base_bdevs_operational": 4, 00:16:12.235 "base_bdevs_list": [ 00:16:12.235 { 00:16:12.235 "name": "NewBaseBdev", 00:16:12.235 "uuid": "d0b0bd7e-23b2-4d3d-8785-f7ab42dccecc", 00:16:12.235 "is_configured": true, 00:16:12.235 "data_offset": 0, 00:16:12.235 "data_size": 65536 00:16:12.235 }, 00:16:12.235 { 00:16:12.235 "name": "BaseBdev2", 00:16:12.235 "uuid": "cb0325ab-f39a-482d-8e32-9cf58fa75d2c", 00:16:12.235 "is_configured": true, 00:16:12.235 "data_offset": 0, 00:16:12.235 "data_size": 65536 00:16:12.235 }, 00:16:12.235 { 00:16:12.235 "name": "BaseBdev3", 00:16:12.235 "uuid": "68a84c32-4a2a-45e7-8efd-198e556319ce", 00:16:12.235 "is_configured": true, 00:16:12.235 "data_offset": 0, 00:16:12.235 "data_size": 65536 00:16:12.235 }, 00:16:12.235 { 00:16:12.235 "name": "BaseBdev4", 00:16:12.235 "uuid": "cd610831-84c2-43f0-87f9-a3de93a81f52", 00:16:12.235 "is_configured": true, 00:16:12.235 "data_offset": 0, 00:16:12.235 "data_size": 65536 00:16:12.235 } 00:16:12.235 ] 00:16:12.235 }' 00:16:12.235 17:08:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.235 17:08:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.802 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:12.802 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:12.802 17:08:36 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:12.802 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:12.802 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:12.802 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:12.802 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:12.802 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:12.802 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.802 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.802 [2024-11-20 17:08:36.469186] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.802 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.802 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:12.802 "name": "Existed_Raid", 00:16:12.802 "aliases": [ 00:16:12.802 "102c3f95-a758-4b40-94a6-c2d1b054e137" 00:16:12.802 ], 00:16:12.802 "product_name": "Raid Volume", 00:16:12.802 "block_size": 512, 00:16:12.802 "num_blocks": 196608, 00:16:12.802 "uuid": "102c3f95-a758-4b40-94a6-c2d1b054e137", 00:16:12.802 "assigned_rate_limits": { 00:16:12.802 "rw_ios_per_sec": 0, 00:16:12.802 "rw_mbytes_per_sec": 0, 00:16:12.802 "r_mbytes_per_sec": 0, 00:16:12.802 "w_mbytes_per_sec": 0 00:16:12.802 }, 00:16:12.802 "claimed": false, 00:16:12.802 "zoned": false, 00:16:12.802 "supported_io_types": { 00:16:12.802 "read": true, 00:16:12.802 "write": true, 00:16:12.802 "unmap": false, 00:16:12.802 "flush": false, 00:16:12.802 "reset": true, 00:16:12.802 "nvme_admin": false, 00:16:12.802 "nvme_io": false, 00:16:12.802 "nvme_io_md": 
false, 00:16:12.802 "write_zeroes": true, 00:16:12.802 "zcopy": false, 00:16:12.802 "get_zone_info": false, 00:16:12.802 "zone_management": false, 00:16:12.802 "zone_append": false, 00:16:12.802 "compare": false, 00:16:12.802 "compare_and_write": false, 00:16:12.802 "abort": false, 00:16:12.802 "seek_hole": false, 00:16:12.802 "seek_data": false, 00:16:12.802 "copy": false, 00:16:12.802 "nvme_iov_md": false 00:16:12.802 }, 00:16:12.802 "driver_specific": { 00:16:12.802 "raid": { 00:16:12.802 "uuid": "102c3f95-a758-4b40-94a6-c2d1b054e137", 00:16:12.802 "strip_size_kb": 64, 00:16:12.802 "state": "online", 00:16:12.802 "raid_level": "raid5f", 00:16:12.802 "superblock": false, 00:16:12.802 "num_base_bdevs": 4, 00:16:12.802 "num_base_bdevs_discovered": 4, 00:16:12.802 "num_base_bdevs_operational": 4, 00:16:12.802 "base_bdevs_list": [ 00:16:12.802 { 00:16:12.802 "name": "NewBaseBdev", 00:16:12.802 "uuid": "d0b0bd7e-23b2-4d3d-8785-f7ab42dccecc", 00:16:12.802 "is_configured": true, 00:16:12.802 "data_offset": 0, 00:16:12.802 "data_size": 65536 00:16:12.802 }, 00:16:12.802 { 00:16:12.802 "name": "BaseBdev2", 00:16:12.802 "uuid": "cb0325ab-f39a-482d-8e32-9cf58fa75d2c", 00:16:12.802 "is_configured": true, 00:16:12.802 "data_offset": 0, 00:16:12.802 "data_size": 65536 00:16:12.802 }, 00:16:12.802 { 00:16:12.802 "name": "BaseBdev3", 00:16:12.802 "uuid": "68a84c32-4a2a-45e7-8efd-198e556319ce", 00:16:12.802 "is_configured": true, 00:16:12.802 "data_offset": 0, 00:16:12.802 "data_size": 65536 00:16:12.802 }, 00:16:12.802 { 00:16:12.802 "name": "BaseBdev4", 00:16:12.802 "uuid": "cd610831-84c2-43f0-87f9-a3de93a81f52", 00:16:12.802 "is_configured": true, 00:16:12.802 "data_offset": 0, 00:16:12.802 "data_size": 65536 00:16:12.802 } 00:16:12.802 ] 00:16:12.802 } 00:16:12.802 } 00:16:12.802 }' 00:16:12.802 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:12.803 17:08:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:12.803 BaseBdev2 00:16:12.803 BaseBdev3 00:16:12.803 BaseBdev4' 00:16:12.803 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.803 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:12.803 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.803 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:12.803 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.803 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.803 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.803 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.062 [2024-11-20 17:08:36.844996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:13.062 [2024-11-20 17:08:36.845031] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.062 [2024-11-20 17:08:36.845128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.062 [2024-11-20 17:08:36.845529] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.062 [2024-11-20 17:08:36.845546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82931 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82931 ']' 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82931 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.062 17:08:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82931 00:16:13.062 killing process with pid 82931 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82931' 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82931 00:16:13.062 [2024-11-20 17:08:36.884290] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:13.062 17:08:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82931 00:16:13.630 [2024-11-20 17:08:37.200709] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.565 ************************************ 00:16:14.565 END TEST raid5f_state_function_test 00:16:14.565 ************************************ 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:14.565 00:16:14.565 real 0m13.068s 00:16:14.565 user 0m21.887s 00:16:14.565 sys 0m1.819s 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.565 17:08:38 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:14.565 17:08:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:14.565 17:08:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.565 17:08:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.565 ************************************ 00:16:14.565 START TEST 
raid5f_state_function_test_sb 00:16:14.565 ************************************ 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:14.565 
17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:14.565 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:14.566 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:14.566 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:14.566 Process raid pid: 83608 00:16:14.566 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83608 00:16:14.566 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83608' 00:16:14.566 17:08:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:14.566 17:08:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83608 00:16:14.566 17:08:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83608 ']' 00:16:14.566 17:08:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.566 17:08:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.566 17:08:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.566 17:08:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.566 17:08:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.566 [2024-11-20 17:08:38.374908] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:16:14.566 [2024-11-20 17:08:38.375087] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.824 [2024-11-20 17:08:38.562236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.083 [2024-11-20 17:08:38.696992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.083 [2024-11-20 17:08:38.905835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.083 [2024-11-20 17:08:38.905891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.649 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.649 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:15.649 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:15.649 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.649 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.649 [2024-11-20 17:08:39.390617] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:15.649 [2024-11-20 17:08:39.390689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:15.649 [2024-11-20 17:08:39.390706] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:15.649 [2024-11-20 17:08:39.390738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:15.650 [2024-11-20 17:08:39.390749] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:15.650 [2024-11-20 17:08:39.390780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:15.650 [2024-11-20 17:08:39.390790] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:15.650 [2024-11-20 17:08:39.390824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.650 "name": "Existed_Raid", 00:16:15.650 "uuid": "539ed64c-75cd-44b3-98f4-5e1531f4a3bc", 00:16:15.650 "strip_size_kb": 64, 00:16:15.650 "state": "configuring", 00:16:15.650 "raid_level": "raid5f", 00:16:15.650 "superblock": true, 00:16:15.650 "num_base_bdevs": 4, 00:16:15.650 "num_base_bdevs_discovered": 0, 00:16:15.650 "num_base_bdevs_operational": 4, 00:16:15.650 "base_bdevs_list": [ 00:16:15.650 { 00:16:15.650 "name": "BaseBdev1", 00:16:15.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.650 "is_configured": false, 00:16:15.650 "data_offset": 0, 00:16:15.650 "data_size": 0 00:16:15.650 }, 00:16:15.650 { 00:16:15.650 "name": "BaseBdev2", 00:16:15.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.650 "is_configured": false, 00:16:15.650 "data_offset": 0, 00:16:15.650 "data_size": 0 00:16:15.650 }, 00:16:15.650 { 00:16:15.650 "name": "BaseBdev3", 00:16:15.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.650 "is_configured": false, 00:16:15.650 "data_offset": 0, 00:16:15.650 "data_size": 0 00:16:15.650 }, 00:16:15.650 { 00:16:15.650 "name": "BaseBdev4", 00:16:15.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.650 "is_configured": false, 00:16:15.650 "data_offset": 0, 00:16:15.650 "data_size": 0 00:16:15.650 } 00:16:15.650 ] 00:16:15.650 }' 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.650 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:16.216 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:16.216 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.216 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.216 [2024-11-20 17:08:39.918701] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:16.216 [2024-11-20 17:08:39.918917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:16.216 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.216 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:16.216 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.216 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.216 [2024-11-20 17:08:39.926688] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:16.216 [2024-11-20 17:08:39.926736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:16.216 [2024-11-20 17:08:39.926767] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:16.216 [2024-11-20 17:08:39.926821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:16.216 [2024-11-20 17:08:39.926833] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:16.217 [2024-11-20 17:08:39.926849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:16.217 [2024-11-20 17:08:39.926859] 
bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:16.217 [2024-11-20 17:08:39.926874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.217 [2024-11-20 17:08:39.973977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.217 BaseBdev1 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.217 17:08:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.217 [ 00:16:16.217 { 00:16:16.217 "name": "BaseBdev1", 00:16:16.217 "aliases": [ 00:16:16.217 "308348b7-902e-4e67-a264-45ac0e80696e" 00:16:16.217 ], 00:16:16.217 "product_name": "Malloc disk", 00:16:16.217 "block_size": 512, 00:16:16.217 "num_blocks": 65536, 00:16:16.217 "uuid": "308348b7-902e-4e67-a264-45ac0e80696e", 00:16:16.217 "assigned_rate_limits": { 00:16:16.217 "rw_ios_per_sec": 0, 00:16:16.217 "rw_mbytes_per_sec": 0, 00:16:16.217 "r_mbytes_per_sec": 0, 00:16:16.217 "w_mbytes_per_sec": 0 00:16:16.217 }, 00:16:16.217 "claimed": true, 00:16:16.217 "claim_type": "exclusive_write", 00:16:16.217 "zoned": false, 00:16:16.217 "supported_io_types": { 00:16:16.217 "read": true, 00:16:16.217 "write": true, 00:16:16.217 "unmap": true, 00:16:16.217 "flush": true, 00:16:16.217 "reset": true, 00:16:16.217 "nvme_admin": false, 00:16:16.217 "nvme_io": false, 00:16:16.217 "nvme_io_md": false, 00:16:16.217 "write_zeroes": true, 00:16:16.217 "zcopy": true, 00:16:16.217 "get_zone_info": false, 00:16:16.217 "zone_management": false, 00:16:16.217 "zone_append": false, 00:16:16.217 "compare": false, 00:16:16.217 "compare_and_write": false, 00:16:16.217 "abort": true, 00:16:16.217 "seek_hole": false, 00:16:16.217 "seek_data": false, 00:16:16.217 "copy": true, 00:16:16.217 "nvme_iov_md": false 00:16:16.217 }, 00:16:16.217 "memory_domains": [ 00:16:16.217 { 00:16:16.217 "dma_device_id": "system", 00:16:16.217 "dma_device_type": 1 00:16:16.217 }, 00:16:16.217 { 00:16:16.217 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:16.217 "dma_device_type": 2 00:16:16.217 } 00:16:16.217 ], 00:16:16.217 "driver_specific": {} 00:16:16.217 } 00:16:16.217 ] 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.217 17:08:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.217 "name": "Existed_Raid", 00:16:16.217 "uuid": "7b9b55ec-6736-42d0-a179-860d275868b2", 00:16:16.217 "strip_size_kb": 64, 00:16:16.217 "state": "configuring", 00:16:16.217 "raid_level": "raid5f", 00:16:16.217 "superblock": true, 00:16:16.217 "num_base_bdevs": 4, 00:16:16.217 "num_base_bdevs_discovered": 1, 00:16:16.217 "num_base_bdevs_operational": 4, 00:16:16.217 "base_bdevs_list": [ 00:16:16.217 { 00:16:16.217 "name": "BaseBdev1", 00:16:16.217 "uuid": "308348b7-902e-4e67-a264-45ac0e80696e", 00:16:16.217 "is_configured": true, 00:16:16.217 "data_offset": 2048, 00:16:16.217 "data_size": 63488 00:16:16.217 }, 00:16:16.217 { 00:16:16.217 "name": "BaseBdev2", 00:16:16.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.217 "is_configured": false, 00:16:16.217 "data_offset": 0, 00:16:16.217 "data_size": 0 00:16:16.217 }, 00:16:16.217 { 00:16:16.217 "name": "BaseBdev3", 00:16:16.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.217 "is_configured": false, 00:16:16.217 "data_offset": 0, 00:16:16.217 "data_size": 0 00:16:16.217 }, 00:16:16.217 { 00:16:16.217 "name": "BaseBdev4", 00:16:16.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.217 "is_configured": false, 00:16:16.217 "data_offset": 0, 00:16:16.217 "data_size": 0 00:16:16.217 } 00:16:16.217 ] 00:16:16.217 }' 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.217 17:08:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.783 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:16.783 17:08:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.783 17:08:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.783 [2024-11-20 17:08:40.530222] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:16.783 [2024-11-20 17:08:40.530268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:16.783 17:08:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.784 [2024-11-20 17:08:40.542289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.784 [2024-11-20 17:08:40.544683] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:16.784 [2024-11-20 17:08:40.544915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:16.784 [2024-11-20 17:08:40.544944] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:16.784 [2024-11-20 17:08:40.544964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:16.784 [2024-11-20 17:08:40.544974] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:16.784 [2024-11-20 17:08:40.544988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.784 17:08:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.784 "name": "Existed_Raid", 00:16:16.784 "uuid": "e3c3303a-b01f-46b0-9026-c271e9663898", 00:16:16.784 "strip_size_kb": 64, 00:16:16.784 "state": "configuring", 00:16:16.784 "raid_level": "raid5f", 00:16:16.784 "superblock": true, 00:16:16.784 "num_base_bdevs": 4, 00:16:16.784 "num_base_bdevs_discovered": 1, 00:16:16.784 "num_base_bdevs_operational": 4, 00:16:16.784 "base_bdevs_list": [ 00:16:16.784 { 00:16:16.784 "name": "BaseBdev1", 00:16:16.784 "uuid": "308348b7-902e-4e67-a264-45ac0e80696e", 00:16:16.784 "is_configured": true, 00:16:16.784 "data_offset": 2048, 00:16:16.784 "data_size": 63488 00:16:16.784 }, 00:16:16.784 { 00:16:16.784 "name": "BaseBdev2", 00:16:16.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.784 "is_configured": false, 00:16:16.784 "data_offset": 0, 00:16:16.784 "data_size": 0 00:16:16.784 }, 00:16:16.784 { 00:16:16.784 "name": "BaseBdev3", 00:16:16.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.784 "is_configured": false, 00:16:16.784 "data_offset": 0, 00:16:16.784 "data_size": 0 00:16:16.784 }, 00:16:16.784 { 00:16:16.784 "name": "BaseBdev4", 00:16:16.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.784 "is_configured": false, 00:16:16.784 "data_offset": 0, 00:16:16.784 "data_size": 0 00:16:16.784 } 00:16:16.784 ] 00:16:16.784 }' 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.784 17:08:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.351 [2024-11-20 17:08:41.124252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:17.351 BaseBdev2 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.351 [ 00:16:17.351 { 00:16:17.351 "name": "BaseBdev2", 00:16:17.351 "aliases": [ 00:16:17.351 
"7189832d-011e-415d-b7d1-0aa34c912f77" 00:16:17.351 ], 00:16:17.351 "product_name": "Malloc disk", 00:16:17.351 "block_size": 512, 00:16:17.351 "num_blocks": 65536, 00:16:17.351 "uuid": "7189832d-011e-415d-b7d1-0aa34c912f77", 00:16:17.351 "assigned_rate_limits": { 00:16:17.351 "rw_ios_per_sec": 0, 00:16:17.351 "rw_mbytes_per_sec": 0, 00:16:17.351 "r_mbytes_per_sec": 0, 00:16:17.351 "w_mbytes_per_sec": 0 00:16:17.351 }, 00:16:17.351 "claimed": true, 00:16:17.351 "claim_type": "exclusive_write", 00:16:17.351 "zoned": false, 00:16:17.351 "supported_io_types": { 00:16:17.351 "read": true, 00:16:17.351 "write": true, 00:16:17.351 "unmap": true, 00:16:17.351 "flush": true, 00:16:17.351 "reset": true, 00:16:17.351 "nvme_admin": false, 00:16:17.351 "nvme_io": false, 00:16:17.351 "nvme_io_md": false, 00:16:17.351 "write_zeroes": true, 00:16:17.351 "zcopy": true, 00:16:17.351 "get_zone_info": false, 00:16:17.351 "zone_management": false, 00:16:17.351 "zone_append": false, 00:16:17.351 "compare": false, 00:16:17.351 "compare_and_write": false, 00:16:17.351 "abort": true, 00:16:17.351 "seek_hole": false, 00:16:17.351 "seek_data": false, 00:16:17.351 "copy": true, 00:16:17.351 "nvme_iov_md": false 00:16:17.351 }, 00:16:17.351 "memory_domains": [ 00:16:17.351 { 00:16:17.351 "dma_device_id": "system", 00:16:17.351 "dma_device_type": 1 00:16:17.351 }, 00:16:17.351 { 00:16:17.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.351 "dma_device_type": 2 00:16:17.351 } 00:16:17.351 ], 00:16:17.351 "driver_specific": {} 00:16:17.351 } 00:16:17.351 ] 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.351 "name": "Existed_Raid", 00:16:17.351 "uuid": 
"e3c3303a-b01f-46b0-9026-c271e9663898", 00:16:17.351 "strip_size_kb": 64, 00:16:17.351 "state": "configuring", 00:16:17.351 "raid_level": "raid5f", 00:16:17.351 "superblock": true, 00:16:17.351 "num_base_bdevs": 4, 00:16:17.351 "num_base_bdevs_discovered": 2, 00:16:17.351 "num_base_bdevs_operational": 4, 00:16:17.351 "base_bdevs_list": [ 00:16:17.351 { 00:16:17.351 "name": "BaseBdev1", 00:16:17.351 "uuid": "308348b7-902e-4e67-a264-45ac0e80696e", 00:16:17.351 "is_configured": true, 00:16:17.351 "data_offset": 2048, 00:16:17.351 "data_size": 63488 00:16:17.351 }, 00:16:17.351 { 00:16:17.351 "name": "BaseBdev2", 00:16:17.351 "uuid": "7189832d-011e-415d-b7d1-0aa34c912f77", 00:16:17.351 "is_configured": true, 00:16:17.351 "data_offset": 2048, 00:16:17.351 "data_size": 63488 00:16:17.351 }, 00:16:17.351 { 00:16:17.351 "name": "BaseBdev3", 00:16:17.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.351 "is_configured": false, 00:16:17.351 "data_offset": 0, 00:16:17.351 "data_size": 0 00:16:17.351 }, 00:16:17.351 { 00:16:17.351 "name": "BaseBdev4", 00:16:17.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.351 "is_configured": false, 00:16:17.351 "data_offset": 0, 00:16:17.351 "data_size": 0 00:16:17.351 } 00:16:17.351 ] 00:16:17.351 }' 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.351 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.918 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:17.918 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.918 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.918 [2024-11-20 17:08:41.737028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:17.918 BaseBdev3 
00:16:17.918 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.918 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:17.918 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:17.918 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:17.918 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:17.918 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:17.918 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:17.918 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:17.918 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.918 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.918 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.918 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:17.918 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.918 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.918 [ 00:16:17.918 { 00:16:17.918 "name": "BaseBdev3", 00:16:17.918 "aliases": [ 00:16:17.918 "57b08cbd-9971-4f34-9676-90fe0e23432f" 00:16:17.918 ], 00:16:17.918 "product_name": "Malloc disk", 00:16:17.918 "block_size": 512, 00:16:17.918 "num_blocks": 65536, 00:16:17.918 "uuid": "57b08cbd-9971-4f34-9676-90fe0e23432f", 00:16:17.918 
"assigned_rate_limits": { 00:16:17.918 "rw_ios_per_sec": 0, 00:16:17.918 "rw_mbytes_per_sec": 0, 00:16:17.918 "r_mbytes_per_sec": 0, 00:16:17.918 "w_mbytes_per_sec": 0 00:16:17.918 }, 00:16:17.918 "claimed": true, 00:16:17.918 "claim_type": "exclusive_write", 00:16:17.918 "zoned": false, 00:16:17.918 "supported_io_types": { 00:16:17.918 "read": true, 00:16:17.918 "write": true, 00:16:17.918 "unmap": true, 00:16:17.918 "flush": true, 00:16:17.918 "reset": true, 00:16:17.918 "nvme_admin": false, 00:16:17.918 "nvme_io": false, 00:16:17.918 "nvme_io_md": false, 00:16:17.918 "write_zeroes": true, 00:16:17.918 "zcopy": true, 00:16:17.918 "get_zone_info": false, 00:16:17.919 "zone_management": false, 00:16:17.919 "zone_append": false, 00:16:17.919 "compare": false, 00:16:17.919 "compare_and_write": false, 00:16:17.919 "abort": true, 00:16:17.919 "seek_hole": false, 00:16:17.919 "seek_data": false, 00:16:17.919 "copy": true, 00:16:17.919 "nvme_iov_md": false 00:16:17.919 }, 00:16:17.919 "memory_domains": [ 00:16:17.919 { 00:16:17.919 "dma_device_id": "system", 00:16:17.919 "dma_device_type": 1 00:16:17.919 }, 00:16:17.919 { 00:16:17.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.919 "dma_device_type": 2 00:16:17.919 } 00:16:17.919 ], 00:16:17.919 "driver_specific": {} 00:16:17.919 } 00:16:17.919 ] 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.919 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.177 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.177 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.177 "name": "Existed_Raid", 00:16:18.177 "uuid": "e3c3303a-b01f-46b0-9026-c271e9663898", 00:16:18.177 "strip_size_kb": 64, 00:16:18.177 "state": "configuring", 00:16:18.177 "raid_level": "raid5f", 00:16:18.177 "superblock": true, 00:16:18.177 "num_base_bdevs": 4, 00:16:18.177 "num_base_bdevs_discovered": 3, 
00:16:18.177 "num_base_bdevs_operational": 4, 00:16:18.177 "base_bdevs_list": [ 00:16:18.177 { 00:16:18.177 "name": "BaseBdev1", 00:16:18.177 "uuid": "308348b7-902e-4e67-a264-45ac0e80696e", 00:16:18.177 "is_configured": true, 00:16:18.177 "data_offset": 2048, 00:16:18.177 "data_size": 63488 00:16:18.177 }, 00:16:18.177 { 00:16:18.177 "name": "BaseBdev2", 00:16:18.177 "uuid": "7189832d-011e-415d-b7d1-0aa34c912f77", 00:16:18.177 "is_configured": true, 00:16:18.177 "data_offset": 2048, 00:16:18.177 "data_size": 63488 00:16:18.177 }, 00:16:18.177 { 00:16:18.177 "name": "BaseBdev3", 00:16:18.177 "uuid": "57b08cbd-9971-4f34-9676-90fe0e23432f", 00:16:18.177 "is_configured": true, 00:16:18.177 "data_offset": 2048, 00:16:18.177 "data_size": 63488 00:16:18.177 }, 00:16:18.177 { 00:16:18.177 "name": "BaseBdev4", 00:16:18.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.177 "is_configured": false, 00:16:18.177 "data_offset": 0, 00:16:18.177 "data_size": 0 00:16:18.177 } 00:16:18.177 ] 00:16:18.177 }' 00:16:18.177 17:08:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.177 17:08:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.482 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:18.482 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.482 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.758 [2024-11-20 17:08:42.348724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:18.758 [2024-11-20 17:08:42.349135] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:18.758 [2024-11-20 17:08:42.349156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:18.758 [2024-11-20 
17:08:42.349476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:18.758 BaseBdev4 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.758 [2024-11-20 17:08:42.356520] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:18.758 [2024-11-20 17:08:42.356689] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:18.758 [2024-11-20 17:08:42.357030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:18.758 17:08:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.758 [ 00:16:18.758 { 00:16:18.758 "name": "BaseBdev4", 00:16:18.758 "aliases": [ 00:16:18.758 "31fab759-dbbb-4994-8862-70c718399e70" 00:16:18.758 ], 00:16:18.758 "product_name": "Malloc disk", 00:16:18.758 "block_size": 512, 00:16:18.758 "num_blocks": 65536, 00:16:18.758 "uuid": "31fab759-dbbb-4994-8862-70c718399e70", 00:16:18.758 "assigned_rate_limits": { 00:16:18.758 "rw_ios_per_sec": 0, 00:16:18.758 "rw_mbytes_per_sec": 0, 00:16:18.758 "r_mbytes_per_sec": 0, 00:16:18.758 "w_mbytes_per_sec": 0 00:16:18.758 }, 00:16:18.758 "claimed": true, 00:16:18.758 "claim_type": "exclusive_write", 00:16:18.758 "zoned": false, 00:16:18.758 "supported_io_types": { 00:16:18.758 "read": true, 00:16:18.758 "write": true, 00:16:18.758 "unmap": true, 00:16:18.758 "flush": true, 00:16:18.758 "reset": true, 00:16:18.758 "nvme_admin": false, 00:16:18.758 "nvme_io": false, 00:16:18.758 "nvme_io_md": false, 00:16:18.758 "write_zeroes": true, 00:16:18.758 "zcopy": true, 00:16:18.758 "get_zone_info": false, 00:16:18.758 "zone_management": false, 00:16:18.758 "zone_append": false, 00:16:18.758 "compare": false, 00:16:18.758 "compare_and_write": false, 00:16:18.758 "abort": true, 00:16:18.758 "seek_hole": false, 00:16:18.758 "seek_data": false, 00:16:18.758 "copy": true, 00:16:18.758 "nvme_iov_md": false 00:16:18.758 }, 00:16:18.758 "memory_domains": [ 00:16:18.758 { 00:16:18.758 "dma_device_id": "system", 00:16:18.758 "dma_device_type": 1 00:16:18.758 }, 00:16:18.758 { 00:16:18.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.758 "dma_device_type": 2 00:16:18.758 } 00:16:18.758 ], 00:16:18.758 "driver_specific": {} 00:16:18.758 } 00:16:18.758 ] 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.758 17:08:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.758 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.758 "name": "Existed_Raid", 00:16:18.758 "uuid": "e3c3303a-b01f-46b0-9026-c271e9663898", 00:16:18.758 "strip_size_kb": 64, 00:16:18.758 "state": "online", 00:16:18.758 "raid_level": "raid5f", 00:16:18.758 "superblock": true, 00:16:18.758 "num_base_bdevs": 4, 00:16:18.758 "num_base_bdevs_discovered": 4, 00:16:18.758 "num_base_bdevs_operational": 4, 00:16:18.758 "base_bdevs_list": [ 00:16:18.758 { 00:16:18.758 "name": "BaseBdev1", 00:16:18.758 "uuid": "308348b7-902e-4e67-a264-45ac0e80696e", 00:16:18.758 "is_configured": true, 00:16:18.758 "data_offset": 2048, 00:16:18.758 "data_size": 63488 00:16:18.758 }, 00:16:18.758 { 00:16:18.758 "name": "BaseBdev2", 00:16:18.758 "uuid": "7189832d-011e-415d-b7d1-0aa34c912f77", 00:16:18.759 "is_configured": true, 00:16:18.759 "data_offset": 2048, 00:16:18.759 "data_size": 63488 00:16:18.759 }, 00:16:18.759 { 00:16:18.759 "name": "BaseBdev3", 00:16:18.759 "uuid": "57b08cbd-9971-4f34-9676-90fe0e23432f", 00:16:18.759 "is_configured": true, 00:16:18.759 "data_offset": 2048, 00:16:18.759 "data_size": 63488 00:16:18.759 }, 00:16:18.759 { 00:16:18.759 "name": "BaseBdev4", 00:16:18.759 "uuid": "31fab759-dbbb-4994-8862-70c718399e70", 00:16:18.759 "is_configured": true, 00:16:18.759 "data_offset": 2048, 00:16:18.759 "data_size": 63488 00:16:18.759 } 00:16:18.759 ] 00:16:18.759 }' 00:16:18.759 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.759 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.325 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:19.325 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:19.325 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:19.325 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:19.325 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:19.325 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:19.325 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:19.325 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:19.325 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.325 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.325 [2024-11-20 17:08:42.924748] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.325 17:08:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.325 17:08:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:19.325 "name": "Existed_Raid", 00:16:19.325 "aliases": [ 00:16:19.325 "e3c3303a-b01f-46b0-9026-c271e9663898" 00:16:19.325 ], 00:16:19.325 "product_name": "Raid Volume", 00:16:19.325 "block_size": 512, 00:16:19.325 "num_blocks": 190464, 00:16:19.325 "uuid": "e3c3303a-b01f-46b0-9026-c271e9663898", 00:16:19.325 "assigned_rate_limits": { 00:16:19.325 "rw_ios_per_sec": 0, 00:16:19.325 "rw_mbytes_per_sec": 0, 00:16:19.325 "r_mbytes_per_sec": 0, 00:16:19.325 "w_mbytes_per_sec": 0 00:16:19.325 }, 00:16:19.325 "claimed": false, 00:16:19.325 "zoned": false, 00:16:19.325 "supported_io_types": { 00:16:19.325 "read": true, 00:16:19.325 "write": true, 00:16:19.325 "unmap": false, 00:16:19.325 "flush": false, 
00:16:19.325 "reset": true, 00:16:19.325 "nvme_admin": false, 00:16:19.325 "nvme_io": false, 00:16:19.326 "nvme_io_md": false, 00:16:19.326 "write_zeroes": true, 00:16:19.326 "zcopy": false, 00:16:19.326 "get_zone_info": false, 00:16:19.326 "zone_management": false, 00:16:19.326 "zone_append": false, 00:16:19.326 "compare": false, 00:16:19.326 "compare_and_write": false, 00:16:19.326 "abort": false, 00:16:19.326 "seek_hole": false, 00:16:19.326 "seek_data": false, 00:16:19.326 "copy": false, 00:16:19.326 "nvme_iov_md": false 00:16:19.326 }, 00:16:19.326 "driver_specific": { 00:16:19.326 "raid": { 00:16:19.326 "uuid": "e3c3303a-b01f-46b0-9026-c271e9663898", 00:16:19.326 "strip_size_kb": 64, 00:16:19.326 "state": "online", 00:16:19.326 "raid_level": "raid5f", 00:16:19.326 "superblock": true, 00:16:19.326 "num_base_bdevs": 4, 00:16:19.326 "num_base_bdevs_discovered": 4, 00:16:19.326 "num_base_bdevs_operational": 4, 00:16:19.326 "base_bdevs_list": [ 00:16:19.326 { 00:16:19.326 "name": "BaseBdev1", 00:16:19.326 "uuid": "308348b7-902e-4e67-a264-45ac0e80696e", 00:16:19.326 "is_configured": true, 00:16:19.326 "data_offset": 2048, 00:16:19.326 "data_size": 63488 00:16:19.326 }, 00:16:19.326 { 00:16:19.326 "name": "BaseBdev2", 00:16:19.326 "uuid": "7189832d-011e-415d-b7d1-0aa34c912f77", 00:16:19.326 "is_configured": true, 00:16:19.326 "data_offset": 2048, 00:16:19.326 "data_size": 63488 00:16:19.326 }, 00:16:19.326 { 00:16:19.326 "name": "BaseBdev3", 00:16:19.326 "uuid": "57b08cbd-9971-4f34-9676-90fe0e23432f", 00:16:19.326 "is_configured": true, 00:16:19.326 "data_offset": 2048, 00:16:19.326 "data_size": 63488 00:16:19.326 }, 00:16:19.326 { 00:16:19.326 "name": "BaseBdev4", 00:16:19.326 "uuid": "31fab759-dbbb-4994-8862-70c718399e70", 00:16:19.326 "is_configured": true, 00:16:19.326 "data_offset": 2048, 00:16:19.326 "data_size": 63488 00:16:19.326 } 00:16:19.326 ] 00:16:19.326 } 00:16:19.326 } 00:16:19.326 }' 00:16:19.326 17:08:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:19.326 BaseBdev2 00:16:19.326 BaseBdev3 00:16:19.326 BaseBdev4' 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.326 17:08:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.326 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:19.585 17:08:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.585 [2024-11-20 17:08:43.296700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.585 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.585 "name": "Existed_Raid", 00:16:19.585 "uuid": "e3c3303a-b01f-46b0-9026-c271e9663898", 00:16:19.585 "strip_size_kb": 64, 00:16:19.585 "state": "online", 00:16:19.585 "raid_level": "raid5f", 00:16:19.585 "superblock": true, 00:16:19.585 "num_base_bdevs": 4, 00:16:19.585 "num_base_bdevs_discovered": 3, 00:16:19.585 "num_base_bdevs_operational": 3, 00:16:19.585 "base_bdevs_list": [ 00:16:19.585 { 00:16:19.585 "name": 
null, 00:16:19.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.585 "is_configured": false, 00:16:19.585 "data_offset": 0, 00:16:19.585 "data_size": 63488 00:16:19.585 }, 00:16:19.585 { 00:16:19.585 "name": "BaseBdev2", 00:16:19.585 "uuid": "7189832d-011e-415d-b7d1-0aa34c912f77", 00:16:19.585 "is_configured": true, 00:16:19.585 "data_offset": 2048, 00:16:19.585 "data_size": 63488 00:16:19.585 }, 00:16:19.585 { 00:16:19.585 "name": "BaseBdev3", 00:16:19.585 "uuid": "57b08cbd-9971-4f34-9676-90fe0e23432f", 00:16:19.585 "is_configured": true, 00:16:19.585 "data_offset": 2048, 00:16:19.585 "data_size": 63488 00:16:19.585 }, 00:16:19.585 { 00:16:19.585 "name": "BaseBdev4", 00:16:19.585 "uuid": "31fab759-dbbb-4994-8862-70c718399e70", 00:16:19.585 "is_configured": true, 00:16:19.585 "data_offset": 2048, 00:16:19.585 "data_size": 63488 00:16:19.586 } 00:16:19.586 ] 00:16:19.586 }' 00:16:19.586 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.586 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.153 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:20.153 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:20.153 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.153 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:20.153 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.153 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.153 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.153 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:20.153 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:20.153 17:08:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:20.153 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.153 17:08:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.153 [2024-11-20 17:08:43.959612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:20.153 [2024-11-20 17:08:43.959939] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.412 [2024-11-20 17:08:44.054256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.412 [2024-11-20 17:08:44.114377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.412 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.412 [2024-11-20 
17:08:44.269910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:20.412 [2024-11-20 17:08:44.270028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.671 17:08:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.671 BaseBdev2 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.671 [ 00:16:20.671 { 00:16:20.671 "name": "BaseBdev2", 00:16:20.671 "aliases": [ 00:16:20.671 "c7162282-a64a-4beb-b8de-27d76a79a978" 00:16:20.671 ], 00:16:20.671 "product_name": "Malloc disk", 00:16:20.671 "block_size": 512, 00:16:20.671 
"num_blocks": 65536, 00:16:20.671 "uuid": "c7162282-a64a-4beb-b8de-27d76a79a978", 00:16:20.671 "assigned_rate_limits": { 00:16:20.671 "rw_ios_per_sec": 0, 00:16:20.671 "rw_mbytes_per_sec": 0, 00:16:20.671 "r_mbytes_per_sec": 0, 00:16:20.671 "w_mbytes_per_sec": 0 00:16:20.671 }, 00:16:20.671 "claimed": false, 00:16:20.671 "zoned": false, 00:16:20.671 "supported_io_types": { 00:16:20.671 "read": true, 00:16:20.671 "write": true, 00:16:20.671 "unmap": true, 00:16:20.671 "flush": true, 00:16:20.671 "reset": true, 00:16:20.671 "nvme_admin": false, 00:16:20.671 "nvme_io": false, 00:16:20.671 "nvme_io_md": false, 00:16:20.671 "write_zeroes": true, 00:16:20.671 "zcopy": true, 00:16:20.671 "get_zone_info": false, 00:16:20.671 "zone_management": false, 00:16:20.671 "zone_append": false, 00:16:20.671 "compare": false, 00:16:20.671 "compare_and_write": false, 00:16:20.671 "abort": true, 00:16:20.671 "seek_hole": false, 00:16:20.671 "seek_data": false, 00:16:20.671 "copy": true, 00:16:20.671 "nvme_iov_md": false 00:16:20.671 }, 00:16:20.671 "memory_domains": [ 00:16:20.671 { 00:16:20.671 "dma_device_id": "system", 00:16:20.671 "dma_device_type": 1 00:16:20.671 }, 00:16:20.671 { 00:16:20.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.671 "dma_device_type": 2 00:16:20.671 } 00:16:20.671 ], 00:16:20.671 "driver_specific": {} 00:16:20.671 } 00:16:20.671 ] 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:20.671 17:08:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.671 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.930 BaseBdev3 00:16:20.930 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.930 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:20.930 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:20.930 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.931 [ 00:16:20.931 { 00:16:20.931 "name": "BaseBdev3", 00:16:20.931 "aliases": [ 00:16:20.931 
"9e4d9a2d-1c20-42f9-81f1-06b33da5b9b0" 00:16:20.931 ], 00:16:20.931 "product_name": "Malloc disk", 00:16:20.931 "block_size": 512, 00:16:20.931 "num_blocks": 65536, 00:16:20.931 "uuid": "9e4d9a2d-1c20-42f9-81f1-06b33da5b9b0", 00:16:20.931 "assigned_rate_limits": { 00:16:20.931 "rw_ios_per_sec": 0, 00:16:20.931 "rw_mbytes_per_sec": 0, 00:16:20.931 "r_mbytes_per_sec": 0, 00:16:20.931 "w_mbytes_per_sec": 0 00:16:20.931 }, 00:16:20.931 "claimed": false, 00:16:20.931 "zoned": false, 00:16:20.931 "supported_io_types": { 00:16:20.931 "read": true, 00:16:20.931 "write": true, 00:16:20.931 "unmap": true, 00:16:20.931 "flush": true, 00:16:20.931 "reset": true, 00:16:20.931 "nvme_admin": false, 00:16:20.931 "nvme_io": false, 00:16:20.931 "nvme_io_md": false, 00:16:20.931 "write_zeroes": true, 00:16:20.931 "zcopy": true, 00:16:20.931 "get_zone_info": false, 00:16:20.931 "zone_management": false, 00:16:20.931 "zone_append": false, 00:16:20.931 "compare": false, 00:16:20.931 "compare_and_write": false, 00:16:20.931 "abort": true, 00:16:20.931 "seek_hole": false, 00:16:20.931 "seek_data": false, 00:16:20.931 "copy": true, 00:16:20.931 "nvme_iov_md": false 00:16:20.931 }, 00:16:20.931 "memory_domains": [ 00:16:20.931 { 00:16:20.931 "dma_device_id": "system", 00:16:20.931 "dma_device_type": 1 00:16:20.931 }, 00:16:20.931 { 00:16:20.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.931 "dma_device_type": 2 00:16:20.931 } 00:16:20.931 ], 00:16:20.931 "driver_specific": {} 00:16:20.931 } 00:16:20.931 ] 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:20.931 17:08:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.931 BaseBdev4 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:20.931 [ 00:16:20.931 { 00:16:20.931 "name": "BaseBdev4", 00:16:20.931 "aliases": [ 00:16:20.931 "78f3e197-e8fe-409d-b7c3-886d23a2d94f" 00:16:20.931 ], 00:16:20.931 "product_name": "Malloc disk", 00:16:20.931 "block_size": 512, 00:16:20.931 "num_blocks": 65536, 00:16:20.931 "uuid": "78f3e197-e8fe-409d-b7c3-886d23a2d94f", 00:16:20.931 "assigned_rate_limits": { 00:16:20.931 "rw_ios_per_sec": 0, 00:16:20.931 "rw_mbytes_per_sec": 0, 00:16:20.931 "r_mbytes_per_sec": 0, 00:16:20.931 "w_mbytes_per_sec": 0 00:16:20.931 }, 00:16:20.931 "claimed": false, 00:16:20.931 "zoned": false, 00:16:20.931 "supported_io_types": { 00:16:20.931 "read": true, 00:16:20.931 "write": true, 00:16:20.931 "unmap": true, 00:16:20.931 "flush": true, 00:16:20.931 "reset": true, 00:16:20.931 "nvme_admin": false, 00:16:20.931 "nvme_io": false, 00:16:20.931 "nvme_io_md": false, 00:16:20.931 "write_zeroes": true, 00:16:20.931 "zcopy": true, 00:16:20.931 "get_zone_info": false, 00:16:20.931 "zone_management": false, 00:16:20.931 "zone_append": false, 00:16:20.931 "compare": false, 00:16:20.931 "compare_and_write": false, 00:16:20.931 "abort": true, 00:16:20.931 "seek_hole": false, 00:16:20.931 "seek_data": false, 00:16:20.931 "copy": true, 00:16:20.931 "nvme_iov_md": false 00:16:20.931 }, 00:16:20.931 "memory_domains": [ 00:16:20.931 { 00:16:20.931 "dma_device_id": "system", 00:16:20.931 "dma_device_type": 1 00:16:20.931 }, 00:16:20.931 { 00:16:20.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.931 "dma_device_type": 2 00:16:20.931 } 00:16:20.931 ], 00:16:20.931 "driver_specific": {} 00:16:20.931 } 00:16:20.931 ] 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:20.931 17:08:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.931 [2024-11-20 17:08:44.672805] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:20.931 [2024-11-20 17:08:44.673168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:20.931 [2024-11-20 17:08:44.673230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:20.931 [2024-11-20 17:08:44.676411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:20.931 [2024-11-20 17:08:44.676627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:20.931 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.932 "name": "Existed_Raid", 00:16:20.932 "uuid": "e20d470a-805c-4e43-9e2f-a73df0fcb925", 00:16:20.932 "strip_size_kb": 64, 00:16:20.932 "state": "configuring", 00:16:20.932 "raid_level": "raid5f", 00:16:20.932 "superblock": true, 00:16:20.932 "num_base_bdevs": 4, 00:16:20.932 "num_base_bdevs_discovered": 3, 00:16:20.932 "num_base_bdevs_operational": 4, 00:16:20.932 "base_bdevs_list": [ 00:16:20.932 { 00:16:20.932 "name": "BaseBdev1", 00:16:20.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.932 "is_configured": false, 00:16:20.932 "data_offset": 0, 00:16:20.932 "data_size": 0 00:16:20.932 }, 00:16:20.932 { 00:16:20.932 "name": "BaseBdev2", 00:16:20.932 "uuid": "c7162282-a64a-4beb-b8de-27d76a79a978", 00:16:20.932 "is_configured": true, 00:16:20.932 "data_offset": 2048, 00:16:20.932 
"data_size": 63488 00:16:20.932 }, 00:16:20.932 { 00:16:20.932 "name": "BaseBdev3", 00:16:20.932 "uuid": "9e4d9a2d-1c20-42f9-81f1-06b33da5b9b0", 00:16:20.932 "is_configured": true, 00:16:20.932 "data_offset": 2048, 00:16:20.932 "data_size": 63488 00:16:20.932 }, 00:16:20.932 { 00:16:20.932 "name": "BaseBdev4", 00:16:20.932 "uuid": "78f3e197-e8fe-409d-b7c3-886d23a2d94f", 00:16:20.932 "is_configured": true, 00:16:20.932 "data_offset": 2048, 00:16:20.932 "data_size": 63488 00:16:20.932 } 00:16:20.932 ] 00:16:20.932 }' 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.932 17:08:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.499 [2024-11-20 17:08:45.205288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.499 17:08:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.499 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.499 "name": "Existed_Raid", 00:16:21.499 "uuid": "e20d470a-805c-4e43-9e2f-a73df0fcb925", 00:16:21.499 "strip_size_kb": 64, 00:16:21.499 "state": "configuring", 00:16:21.499 "raid_level": "raid5f", 00:16:21.499 "superblock": true, 00:16:21.499 "num_base_bdevs": 4, 00:16:21.499 "num_base_bdevs_discovered": 2, 00:16:21.499 "num_base_bdevs_operational": 4, 00:16:21.499 "base_bdevs_list": [ 00:16:21.499 { 00:16:21.499 "name": "BaseBdev1", 00:16:21.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.499 "is_configured": false, 00:16:21.499 "data_offset": 0, 00:16:21.499 "data_size": 0 00:16:21.499 }, 00:16:21.499 { 00:16:21.499 "name": null, 00:16:21.499 "uuid": "c7162282-a64a-4beb-b8de-27d76a79a978", 00:16:21.499 
"is_configured": false, 00:16:21.499 "data_offset": 0, 00:16:21.499 "data_size": 63488 00:16:21.499 }, 00:16:21.499 { 00:16:21.499 "name": "BaseBdev3", 00:16:21.499 "uuid": "9e4d9a2d-1c20-42f9-81f1-06b33da5b9b0", 00:16:21.500 "is_configured": true, 00:16:21.500 "data_offset": 2048, 00:16:21.500 "data_size": 63488 00:16:21.500 }, 00:16:21.500 { 00:16:21.500 "name": "BaseBdev4", 00:16:21.500 "uuid": "78f3e197-e8fe-409d-b7c3-886d23a2d94f", 00:16:21.500 "is_configured": true, 00:16:21.500 "data_offset": 2048, 00:16:21.500 "data_size": 63488 00:16:21.500 } 00:16:21.500 ] 00:16:21.500 }' 00:16:21.500 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.500 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.067 [2024-11-20 17:08:45.824623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:16:22.067 BaseBdev1 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.067 [ 00:16:22.067 { 00:16:22.067 "name": "BaseBdev1", 00:16:22.067 "aliases": [ 00:16:22.067 "aef8f6a6-0d1c-4a2e-b964-56d2222be6ca" 00:16:22.067 ], 00:16:22.067 "product_name": "Malloc disk", 00:16:22.067 "block_size": 512, 00:16:22.067 "num_blocks": 65536, 00:16:22.067 "uuid": "aef8f6a6-0d1c-4a2e-b964-56d2222be6ca", 
00:16:22.067 "assigned_rate_limits": { 00:16:22.067 "rw_ios_per_sec": 0, 00:16:22.067 "rw_mbytes_per_sec": 0, 00:16:22.067 "r_mbytes_per_sec": 0, 00:16:22.067 "w_mbytes_per_sec": 0 00:16:22.067 }, 00:16:22.067 "claimed": true, 00:16:22.067 "claim_type": "exclusive_write", 00:16:22.067 "zoned": false, 00:16:22.067 "supported_io_types": { 00:16:22.067 "read": true, 00:16:22.067 "write": true, 00:16:22.067 "unmap": true, 00:16:22.067 "flush": true, 00:16:22.067 "reset": true, 00:16:22.067 "nvme_admin": false, 00:16:22.067 "nvme_io": false, 00:16:22.067 "nvme_io_md": false, 00:16:22.067 "write_zeroes": true, 00:16:22.067 "zcopy": true, 00:16:22.067 "get_zone_info": false, 00:16:22.067 "zone_management": false, 00:16:22.067 "zone_append": false, 00:16:22.067 "compare": false, 00:16:22.067 "compare_and_write": false, 00:16:22.067 "abort": true, 00:16:22.067 "seek_hole": false, 00:16:22.067 "seek_data": false, 00:16:22.067 "copy": true, 00:16:22.067 "nvme_iov_md": false 00:16:22.067 }, 00:16:22.067 "memory_domains": [ 00:16:22.067 { 00:16:22.067 "dma_device_id": "system", 00:16:22.067 "dma_device_type": 1 00:16:22.067 }, 00:16:22.067 { 00:16:22.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.067 "dma_device_type": 2 00:16:22.067 } 00:16:22.067 ], 00:16:22.067 "driver_specific": {} 00:16:22.067 } 00:16:22.067 ] 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.067 17:08:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.067 "name": "Existed_Raid", 00:16:22.067 "uuid": "e20d470a-805c-4e43-9e2f-a73df0fcb925", 00:16:22.067 "strip_size_kb": 64, 00:16:22.067 "state": "configuring", 00:16:22.067 "raid_level": "raid5f", 00:16:22.067 "superblock": true, 00:16:22.067 "num_base_bdevs": 4, 00:16:22.067 "num_base_bdevs_discovered": 3, 00:16:22.067 "num_base_bdevs_operational": 4, 00:16:22.067 "base_bdevs_list": [ 00:16:22.067 { 00:16:22.067 "name": "BaseBdev1", 00:16:22.067 "uuid": "aef8f6a6-0d1c-4a2e-b964-56d2222be6ca", 
00:16:22.067 "is_configured": true, 00:16:22.067 "data_offset": 2048, 00:16:22.067 "data_size": 63488 00:16:22.067 }, 00:16:22.067 { 00:16:22.067 "name": null, 00:16:22.067 "uuid": "c7162282-a64a-4beb-b8de-27d76a79a978", 00:16:22.067 "is_configured": false, 00:16:22.067 "data_offset": 0, 00:16:22.067 "data_size": 63488 00:16:22.067 }, 00:16:22.067 { 00:16:22.067 "name": "BaseBdev3", 00:16:22.067 "uuid": "9e4d9a2d-1c20-42f9-81f1-06b33da5b9b0", 00:16:22.067 "is_configured": true, 00:16:22.067 "data_offset": 2048, 00:16:22.067 "data_size": 63488 00:16:22.067 }, 00:16:22.067 { 00:16:22.067 "name": "BaseBdev4", 00:16:22.067 "uuid": "78f3e197-e8fe-409d-b7c3-886d23a2d94f", 00:16:22.067 "is_configured": true, 00:16:22.067 "data_offset": 2048, 00:16:22.067 "data_size": 63488 00:16:22.067 } 00:16:22.067 ] 00:16:22.067 }' 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.067 17:08:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.635 [2024-11-20 17:08:46.429046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.635 "name": "Existed_Raid", 00:16:22.635 "uuid": "e20d470a-805c-4e43-9e2f-a73df0fcb925", 00:16:22.635 "strip_size_kb": 64, 00:16:22.635 "state": "configuring", 00:16:22.635 "raid_level": "raid5f", 00:16:22.635 "superblock": true, 00:16:22.635 "num_base_bdevs": 4, 00:16:22.635 "num_base_bdevs_discovered": 2, 00:16:22.635 "num_base_bdevs_operational": 4, 00:16:22.635 "base_bdevs_list": [ 00:16:22.635 { 00:16:22.635 "name": "BaseBdev1", 00:16:22.635 "uuid": "aef8f6a6-0d1c-4a2e-b964-56d2222be6ca", 00:16:22.635 "is_configured": true, 00:16:22.635 "data_offset": 2048, 00:16:22.635 "data_size": 63488 00:16:22.635 }, 00:16:22.635 { 00:16:22.635 "name": null, 00:16:22.635 "uuid": "c7162282-a64a-4beb-b8de-27d76a79a978", 00:16:22.635 "is_configured": false, 00:16:22.635 "data_offset": 0, 00:16:22.635 "data_size": 63488 00:16:22.635 }, 00:16:22.635 { 00:16:22.635 "name": null, 00:16:22.635 "uuid": "9e4d9a2d-1c20-42f9-81f1-06b33da5b9b0", 00:16:22.635 "is_configured": false, 00:16:22.635 "data_offset": 0, 00:16:22.635 "data_size": 63488 00:16:22.635 }, 00:16:22.635 { 00:16:22.635 "name": "BaseBdev4", 00:16:22.635 "uuid": "78f3e197-e8fe-409d-b7c3-886d23a2d94f", 00:16:22.635 "is_configured": true, 00:16:22.635 "data_offset": 2048, 00:16:22.635 "data_size": 63488 00:16:22.635 } 00:16:22.635 ] 00:16:22.635 }' 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.635 17:08:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.203 17:08:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:23.203 17:08:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.203 17:08:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.203 17:08:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.203 17:08:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.203 [2024-11-20 17:08:47.013074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.203 17:08:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.462 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.462 "name": "Existed_Raid", 00:16:23.462 "uuid": "e20d470a-805c-4e43-9e2f-a73df0fcb925", 00:16:23.462 "strip_size_kb": 64, 00:16:23.462 "state": "configuring", 00:16:23.462 "raid_level": "raid5f", 00:16:23.462 "superblock": true, 00:16:23.462 "num_base_bdevs": 4, 00:16:23.462 "num_base_bdevs_discovered": 3, 00:16:23.462 "num_base_bdevs_operational": 4, 00:16:23.462 "base_bdevs_list": [ 00:16:23.462 { 00:16:23.462 "name": "BaseBdev1", 00:16:23.462 "uuid": "aef8f6a6-0d1c-4a2e-b964-56d2222be6ca", 00:16:23.462 "is_configured": true, 00:16:23.462 "data_offset": 2048, 00:16:23.462 "data_size": 63488 00:16:23.462 }, 00:16:23.462 { 00:16:23.462 "name": null, 00:16:23.462 "uuid": "c7162282-a64a-4beb-b8de-27d76a79a978", 00:16:23.462 "is_configured": false, 00:16:23.462 "data_offset": 0, 00:16:23.462 "data_size": 63488 00:16:23.462 }, 00:16:23.462 { 00:16:23.462 "name": "BaseBdev3", 00:16:23.462 "uuid": "9e4d9a2d-1c20-42f9-81f1-06b33da5b9b0", 
00:16:23.462 "is_configured": true, 00:16:23.462 "data_offset": 2048, 00:16:23.462 "data_size": 63488 00:16:23.462 }, 00:16:23.462 { 00:16:23.462 "name": "BaseBdev4", 00:16:23.462 "uuid": "78f3e197-e8fe-409d-b7c3-886d23a2d94f", 00:16:23.462 "is_configured": true, 00:16:23.462 "data_offset": 2048, 00:16:23.462 "data_size": 63488 00:16:23.462 } 00:16:23.462 ] 00:16:23.462 }' 00:16:23.462 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.462 17:08:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.720 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.720 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:23.721 17:08:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.721 17:08:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.721 17:08:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.979 [2024-11-20 17:08:47.601303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.979 "name": "Existed_Raid", 00:16:23.979 "uuid": "e20d470a-805c-4e43-9e2f-a73df0fcb925", 00:16:23.979 "strip_size_kb": 64, 00:16:23.979 "state": "configuring", 00:16:23.979 "raid_level": "raid5f", 
00:16:23.979 "superblock": true, 00:16:23.979 "num_base_bdevs": 4, 00:16:23.979 "num_base_bdevs_discovered": 2, 00:16:23.979 "num_base_bdevs_operational": 4, 00:16:23.979 "base_bdevs_list": [ 00:16:23.979 { 00:16:23.979 "name": null, 00:16:23.979 "uuid": "aef8f6a6-0d1c-4a2e-b964-56d2222be6ca", 00:16:23.979 "is_configured": false, 00:16:23.979 "data_offset": 0, 00:16:23.979 "data_size": 63488 00:16:23.979 }, 00:16:23.979 { 00:16:23.979 "name": null, 00:16:23.979 "uuid": "c7162282-a64a-4beb-b8de-27d76a79a978", 00:16:23.979 "is_configured": false, 00:16:23.979 "data_offset": 0, 00:16:23.979 "data_size": 63488 00:16:23.979 }, 00:16:23.979 { 00:16:23.979 "name": "BaseBdev3", 00:16:23.979 "uuid": "9e4d9a2d-1c20-42f9-81f1-06b33da5b9b0", 00:16:23.979 "is_configured": true, 00:16:23.979 "data_offset": 2048, 00:16:23.979 "data_size": 63488 00:16:23.979 }, 00:16:23.979 { 00:16:23.979 "name": "BaseBdev4", 00:16:23.979 "uuid": "78f3e197-e8fe-409d-b7c3-886d23a2d94f", 00:16:23.979 "is_configured": true, 00:16:23.979 "data_offset": 2048, 00:16:23.979 "data_size": 63488 00:16:23.979 } 00:16:23.979 ] 00:16:23.979 }' 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.979 17:08:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.546 [2024-11-20 17:08:48.264485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.546 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:24.547 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.547 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.547 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.547 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.547 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.547 "name": "Existed_Raid", 00:16:24.547 "uuid": "e20d470a-805c-4e43-9e2f-a73df0fcb925", 00:16:24.547 "strip_size_kb": 64, 00:16:24.547 "state": "configuring", 00:16:24.547 "raid_level": "raid5f", 00:16:24.547 "superblock": true, 00:16:24.547 "num_base_bdevs": 4, 00:16:24.547 "num_base_bdevs_discovered": 3, 00:16:24.547 "num_base_bdevs_operational": 4, 00:16:24.547 "base_bdevs_list": [ 00:16:24.547 { 00:16:24.547 "name": null, 00:16:24.547 "uuid": "aef8f6a6-0d1c-4a2e-b964-56d2222be6ca", 00:16:24.547 "is_configured": false, 00:16:24.547 "data_offset": 0, 00:16:24.547 "data_size": 63488 00:16:24.547 }, 00:16:24.547 { 00:16:24.547 "name": "BaseBdev2", 00:16:24.547 "uuid": "c7162282-a64a-4beb-b8de-27d76a79a978", 00:16:24.547 "is_configured": true, 00:16:24.547 "data_offset": 2048, 00:16:24.547 "data_size": 63488 00:16:24.547 }, 00:16:24.547 { 00:16:24.547 "name": "BaseBdev3", 00:16:24.547 "uuid": "9e4d9a2d-1c20-42f9-81f1-06b33da5b9b0", 00:16:24.547 "is_configured": true, 00:16:24.547 "data_offset": 2048, 00:16:24.547 "data_size": 63488 00:16:24.547 }, 00:16:24.547 { 00:16:24.547 "name": "BaseBdev4", 00:16:24.547 "uuid": "78f3e197-e8fe-409d-b7c3-886d23a2d94f", 00:16:24.547 "is_configured": true, 00:16:24.547 "data_offset": 2048, 00:16:24.547 "data_size": 63488 00:16:24.547 } 00:16:24.547 ] 00:16:24.547 }' 00:16:24.547 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:16:24.547 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u aef8f6a6-0d1c-4a2e-b964-56d2222be6ca 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.112 [2024-11-20 17:08:48.936865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:25.112 [2024-11-20 17:08:48.937169] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:25.112 [2024-11-20 17:08:48.937188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:25.112 NewBaseBdev 00:16:25.112 [2024-11-20 17:08:48.937485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.112 [2024-11-20 17:08:48.942862] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:25.112 [2024-11-20 17:08:48.942895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:25.112 [2024-11-20 17:08:48.943178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.112 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.112 [ 00:16:25.112 { 00:16:25.112 "name": "NewBaseBdev", 00:16:25.112 "aliases": [ 00:16:25.112 "aef8f6a6-0d1c-4a2e-b964-56d2222be6ca" 00:16:25.112 ], 00:16:25.112 "product_name": "Malloc disk", 00:16:25.112 "block_size": 512, 00:16:25.112 "num_blocks": 65536, 00:16:25.112 "uuid": "aef8f6a6-0d1c-4a2e-b964-56d2222be6ca", 00:16:25.112 "assigned_rate_limits": { 00:16:25.112 "rw_ios_per_sec": 0, 00:16:25.112 "rw_mbytes_per_sec": 0, 00:16:25.112 "r_mbytes_per_sec": 0, 00:16:25.112 "w_mbytes_per_sec": 0 00:16:25.112 }, 00:16:25.112 "claimed": true, 00:16:25.112 "claim_type": "exclusive_write", 00:16:25.112 "zoned": false, 00:16:25.112 "supported_io_types": { 00:16:25.112 "read": true, 00:16:25.112 "write": true, 00:16:25.112 "unmap": true, 00:16:25.112 "flush": true, 00:16:25.112 "reset": true, 00:16:25.113 "nvme_admin": false, 00:16:25.113 "nvme_io": false, 00:16:25.113 "nvme_io_md": false, 00:16:25.113 "write_zeroes": true, 00:16:25.113 "zcopy": true, 00:16:25.113 "get_zone_info": false, 00:16:25.113 "zone_management": false, 00:16:25.113 "zone_append": false, 00:16:25.113 "compare": false, 00:16:25.113 "compare_and_write": false, 00:16:25.113 "abort": true, 00:16:25.113 "seek_hole": false, 00:16:25.113 "seek_data": false, 00:16:25.113 "copy": true, 00:16:25.113 "nvme_iov_md": false 00:16:25.113 }, 00:16:25.113 "memory_domains": [ 00:16:25.113 { 00:16:25.113 "dma_device_id": "system", 00:16:25.113 "dma_device_type": 1 00:16:25.113 }, 00:16:25.113 { 00:16:25.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.113 "dma_device_type": 2 00:16:25.113 } 
00:16:25.113 ], 00:16:25.113 "driver_specific": {} 00:16:25.113 } 00:16:25.113 ] 00:16:25.113 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.113 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:25.113 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:25.113 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.382 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.382 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.382 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.382 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.382 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.382 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.382 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.382 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.382 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.382 17:08:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.383 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.383 17:08:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.383 
17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.383 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.383 "name": "Existed_Raid", 00:16:25.383 "uuid": "e20d470a-805c-4e43-9e2f-a73df0fcb925", 00:16:25.383 "strip_size_kb": 64, 00:16:25.383 "state": "online", 00:16:25.383 "raid_level": "raid5f", 00:16:25.383 "superblock": true, 00:16:25.383 "num_base_bdevs": 4, 00:16:25.383 "num_base_bdevs_discovered": 4, 00:16:25.383 "num_base_bdevs_operational": 4, 00:16:25.383 "base_bdevs_list": [ 00:16:25.383 { 00:16:25.383 "name": "NewBaseBdev", 00:16:25.383 "uuid": "aef8f6a6-0d1c-4a2e-b964-56d2222be6ca", 00:16:25.383 "is_configured": true, 00:16:25.383 "data_offset": 2048, 00:16:25.383 "data_size": 63488 00:16:25.383 }, 00:16:25.383 { 00:16:25.383 "name": "BaseBdev2", 00:16:25.383 "uuid": "c7162282-a64a-4beb-b8de-27d76a79a978", 00:16:25.383 "is_configured": true, 00:16:25.383 "data_offset": 2048, 00:16:25.383 "data_size": 63488 00:16:25.383 }, 00:16:25.383 { 00:16:25.383 "name": "BaseBdev3", 00:16:25.383 "uuid": "9e4d9a2d-1c20-42f9-81f1-06b33da5b9b0", 00:16:25.383 "is_configured": true, 00:16:25.383 "data_offset": 2048, 00:16:25.383 "data_size": 63488 00:16:25.383 }, 00:16:25.383 { 00:16:25.383 "name": "BaseBdev4", 00:16:25.383 "uuid": "78f3e197-e8fe-409d-b7c3-886d23a2d94f", 00:16:25.383 "is_configured": true, 00:16:25.383 "data_offset": 2048, 00:16:25.383 "data_size": 63488 00:16:25.383 } 00:16:25.383 ] 00:16:25.383 }' 00:16:25.384 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.384 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.650 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:25.650 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:16:25.650 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:25.650 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:25.650 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:25.650 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:25.650 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:25.650 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.650 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.650 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:25.650 [2024-11-20 17:08:49.486497] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.650 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.908 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:25.908 "name": "Existed_Raid", 00:16:25.908 "aliases": [ 00:16:25.908 "e20d470a-805c-4e43-9e2f-a73df0fcb925" 00:16:25.908 ], 00:16:25.908 "product_name": "Raid Volume", 00:16:25.908 "block_size": 512, 00:16:25.908 "num_blocks": 190464, 00:16:25.908 "uuid": "e20d470a-805c-4e43-9e2f-a73df0fcb925", 00:16:25.908 "assigned_rate_limits": { 00:16:25.908 "rw_ios_per_sec": 0, 00:16:25.908 "rw_mbytes_per_sec": 0, 00:16:25.908 "r_mbytes_per_sec": 0, 00:16:25.908 "w_mbytes_per_sec": 0 00:16:25.908 }, 00:16:25.908 "claimed": false, 00:16:25.908 "zoned": false, 00:16:25.908 "supported_io_types": { 00:16:25.908 "read": true, 00:16:25.908 "write": true, 00:16:25.908 "unmap": false, 00:16:25.909 "flush": false, 
00:16:25.909 "reset": true, 00:16:25.909 "nvme_admin": false, 00:16:25.909 "nvme_io": false, 00:16:25.909 "nvme_io_md": false, 00:16:25.909 "write_zeroes": true, 00:16:25.909 "zcopy": false, 00:16:25.909 "get_zone_info": false, 00:16:25.909 "zone_management": false, 00:16:25.909 "zone_append": false, 00:16:25.909 "compare": false, 00:16:25.909 "compare_and_write": false, 00:16:25.909 "abort": false, 00:16:25.909 "seek_hole": false, 00:16:25.909 "seek_data": false, 00:16:25.909 "copy": false, 00:16:25.909 "nvme_iov_md": false 00:16:25.909 }, 00:16:25.909 "driver_specific": { 00:16:25.909 "raid": { 00:16:25.909 "uuid": "e20d470a-805c-4e43-9e2f-a73df0fcb925", 00:16:25.909 "strip_size_kb": 64, 00:16:25.909 "state": "online", 00:16:25.909 "raid_level": "raid5f", 00:16:25.909 "superblock": true, 00:16:25.909 "num_base_bdevs": 4, 00:16:25.909 "num_base_bdevs_discovered": 4, 00:16:25.909 "num_base_bdevs_operational": 4, 00:16:25.909 "base_bdevs_list": [ 00:16:25.909 { 00:16:25.909 "name": "NewBaseBdev", 00:16:25.909 "uuid": "aef8f6a6-0d1c-4a2e-b964-56d2222be6ca", 00:16:25.909 "is_configured": true, 00:16:25.909 "data_offset": 2048, 00:16:25.909 "data_size": 63488 00:16:25.909 }, 00:16:25.909 { 00:16:25.909 "name": "BaseBdev2", 00:16:25.909 "uuid": "c7162282-a64a-4beb-b8de-27d76a79a978", 00:16:25.909 "is_configured": true, 00:16:25.909 "data_offset": 2048, 00:16:25.909 "data_size": 63488 00:16:25.909 }, 00:16:25.909 { 00:16:25.909 "name": "BaseBdev3", 00:16:25.909 "uuid": "9e4d9a2d-1c20-42f9-81f1-06b33da5b9b0", 00:16:25.909 "is_configured": true, 00:16:25.909 "data_offset": 2048, 00:16:25.909 "data_size": 63488 00:16:25.909 }, 00:16:25.909 { 00:16:25.909 "name": "BaseBdev4", 00:16:25.909 "uuid": "78f3e197-e8fe-409d-b7c3-886d23a2d94f", 00:16:25.909 "is_configured": true, 00:16:25.909 "data_offset": 2048, 00:16:25.909 "data_size": 63488 00:16:25.909 } 00:16:25.909 ] 00:16:25.909 } 00:16:25.909 } 00:16:25.909 }' 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:25.909 BaseBdev2 00:16:25.909 BaseBdev3 00:16:25.909 BaseBdev4' 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.909 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.167 [2024-11-20 17:08:49.862346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:26.167 [2024-11-20 17:08:49.862379] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.167 [2024-11-20 17:08:49.862458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.167 [2024-11-20 17:08:49.862933] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.167 [2024-11-20 17:08:49.862952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83608 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83608 ']' 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
83608 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83608 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83608' 00:16:26.167 killing process with pid 83608 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83608 00:16:26.167 [2024-11-20 17:08:49.905252] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:26.167 17:08:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83608 00:16:26.426 [2024-11-20 17:08:50.215242] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:27.799 17:08:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:27.799 00:16:27.799 real 0m12.996s 00:16:27.799 user 0m21.549s 00:16:27.799 sys 0m1.812s 00:16:27.799 17:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.799 17:08:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.799 ************************************ 00:16:27.799 END TEST raid5f_state_function_test_sb 00:16:27.799 ************************************ 00:16:27.799 17:08:51 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:27.799 17:08:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 
-le 1 ']' 00:16:27.799 17:08:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.799 17:08:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:27.799 ************************************ 00:16:27.799 START TEST raid5f_superblock_test 00:16:27.799 ************************************ 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:27.799 17:08:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84293 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84293 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84293 ']' 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.799 17:08:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.799 [2024-11-20 17:08:51.412580] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:16:27.799 [2024-11-20 17:08:51.413074] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84293 ] 00:16:27.799 [2024-11-20 17:08:51.584555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.057 [2024-11-20 17:08:51.719213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.057 [2024-11-20 17:08:51.911941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.057 [2024-11-20 17:08:51.912044] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.623 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.623 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:28.623 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:28.623 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:28.623 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:28.623 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:28.623 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:28.623 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.623 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.623 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.623 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:28.623 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.623 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.623 malloc1 00:16:28.623 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.624 [2024-11-20 17:08:52.421931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:28.624 [2024-11-20 17:08:52.422344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.624 [2024-11-20 17:08:52.422390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:28.624 [2024-11-20 17:08:52.422409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.624 [2024-11-20 17:08:52.425215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.624 [2024-11-20 17:08:52.425258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:28.624 pt1 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.624 malloc2 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.624 [2024-11-20 17:08:52.473728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:28.624 [2024-11-20 17:08:52.473854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.624 [2024-11-20 17:08:52.473913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:28.624 [2024-11-20 17:08:52.473929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.624 [2024-11-20 17:08:52.476545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.624 [2024-11-20 17:08:52.476943] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:28.624 pt2 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.624 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.883 malloc3 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.883 [2024-11-20 17:08:52.553360] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:28.883 [2024-11-20 17:08:52.553450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.883 [2024-11-20 17:08:52.553517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:28.883 [2024-11-20 17:08:52.553537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.883 [2024-11-20 17:08:52.557372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.883 [2024-11-20 17:08:52.557426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:28.883 pt3 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.883 17:08:52 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.883 malloc4 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.883 [2024-11-20 17:08:52.616349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:28.883 [2024-11-20 17:08:52.616442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.883 [2024-11-20 17:08:52.616484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:28.883 [2024-11-20 17:08:52.616503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.883 [2024-11-20 17:08:52.620010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.883 [2024-11-20 17:08:52.620064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:28.883 pt4 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.883 [2024-11-20 17:08:52.628452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:28.883 [2024-11-20 17:08:52.631537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:28.883 [2024-11-20 17:08:52.632023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:28.883 [2024-11-20 17:08:52.632127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:28.883 [2024-11-20 17:08:52.632453] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:28.883 [2024-11-20 17:08:52.632481] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:28.883 [2024-11-20 17:08:52.632898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:28.883 [2024-11-20 17:08:52.641411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:28.883 [2024-11-20 17:08:52.641451] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:28.883 [2024-11-20 17:08:52.641787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:28.883 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.884 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.884 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.884 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.884 
17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.884 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.884 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.884 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.884 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.884 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.884 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.884 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.884 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.884 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.884 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.884 "name": "raid_bdev1", 00:16:28.884 "uuid": "c8323fb5-5495-4769-b120-ea8571cca5c5", 00:16:28.884 "strip_size_kb": 64, 00:16:28.884 "state": "online", 00:16:28.884 "raid_level": "raid5f", 00:16:28.884 "superblock": true, 00:16:28.884 "num_base_bdevs": 4, 00:16:28.884 "num_base_bdevs_discovered": 4, 00:16:28.884 "num_base_bdevs_operational": 4, 00:16:28.884 "base_bdevs_list": [ 00:16:28.884 { 00:16:28.884 "name": "pt1", 00:16:28.884 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:28.884 "is_configured": true, 00:16:28.884 "data_offset": 2048, 00:16:28.884 "data_size": 63488 00:16:28.884 }, 00:16:28.884 { 00:16:28.884 "name": "pt2", 00:16:28.884 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.884 "is_configured": true, 00:16:28.884 "data_offset": 2048, 00:16:28.884 
"data_size": 63488 00:16:28.884 }, 00:16:28.884 { 00:16:28.884 "name": "pt3", 00:16:28.884 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.884 "is_configured": true, 00:16:28.884 "data_offset": 2048, 00:16:28.884 "data_size": 63488 00:16:28.884 }, 00:16:28.884 { 00:16:28.884 "name": "pt4", 00:16:28.884 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:28.884 "is_configured": true, 00:16:28.884 "data_offset": 2048, 00:16:28.884 "data_size": 63488 00:16:28.884 } 00:16:28.884 ] 00:16:28.884 }' 00:16:28.884 17:08:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.884 17:08:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:29.453 [2024-11-20 17:08:53.167626] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:29.453 "name": "raid_bdev1", 00:16:29.453 "aliases": [ 00:16:29.453 "c8323fb5-5495-4769-b120-ea8571cca5c5" 00:16:29.453 ], 00:16:29.453 "product_name": "Raid Volume", 00:16:29.453 "block_size": 512, 00:16:29.453 "num_blocks": 190464, 00:16:29.453 "uuid": "c8323fb5-5495-4769-b120-ea8571cca5c5", 00:16:29.453 "assigned_rate_limits": { 00:16:29.453 "rw_ios_per_sec": 0, 00:16:29.453 "rw_mbytes_per_sec": 0, 00:16:29.453 "r_mbytes_per_sec": 0, 00:16:29.453 "w_mbytes_per_sec": 0 00:16:29.453 }, 00:16:29.453 "claimed": false, 00:16:29.453 "zoned": false, 00:16:29.453 "supported_io_types": { 00:16:29.453 "read": true, 00:16:29.453 "write": true, 00:16:29.453 "unmap": false, 00:16:29.453 "flush": false, 00:16:29.453 "reset": true, 00:16:29.453 "nvme_admin": false, 00:16:29.453 "nvme_io": false, 00:16:29.453 "nvme_io_md": false, 00:16:29.453 "write_zeroes": true, 00:16:29.453 "zcopy": false, 00:16:29.453 "get_zone_info": false, 00:16:29.453 "zone_management": false, 00:16:29.453 "zone_append": false, 00:16:29.453 "compare": false, 00:16:29.453 "compare_and_write": false, 00:16:29.453 "abort": false, 00:16:29.453 "seek_hole": false, 00:16:29.453 "seek_data": false, 00:16:29.453 "copy": false, 00:16:29.453 "nvme_iov_md": false 00:16:29.453 }, 00:16:29.453 "driver_specific": { 00:16:29.453 "raid": { 00:16:29.453 "uuid": "c8323fb5-5495-4769-b120-ea8571cca5c5", 00:16:29.453 "strip_size_kb": 64, 00:16:29.453 "state": "online", 00:16:29.453 "raid_level": "raid5f", 00:16:29.453 "superblock": true, 00:16:29.453 "num_base_bdevs": 4, 00:16:29.453 "num_base_bdevs_discovered": 4, 00:16:29.453 "num_base_bdevs_operational": 4, 00:16:29.453 "base_bdevs_list": [ 00:16:29.453 { 00:16:29.453 "name": "pt1", 00:16:29.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:29.453 "is_configured": true, 00:16:29.453 "data_offset": 2048, 
00:16:29.453 "data_size": 63488 00:16:29.453 }, 00:16:29.453 { 00:16:29.453 "name": "pt2", 00:16:29.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.453 "is_configured": true, 00:16:29.453 "data_offset": 2048, 00:16:29.453 "data_size": 63488 00:16:29.453 }, 00:16:29.453 { 00:16:29.453 "name": "pt3", 00:16:29.453 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:29.453 "is_configured": true, 00:16:29.453 "data_offset": 2048, 00:16:29.453 "data_size": 63488 00:16:29.453 }, 00:16:29.453 { 00:16:29.453 "name": "pt4", 00:16:29.453 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:29.453 "is_configured": true, 00:16:29.453 "data_offset": 2048, 00:16:29.453 "data_size": 63488 00:16:29.453 } 00:16:29.453 ] 00:16:29.453 } 00:16:29.453 } 00:16:29.453 }' 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:29.453 pt2 00:16:29.453 pt3 00:16:29.453 pt4' 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.453 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.713 17:08:53 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:29.713 [2024-11-20 17:08:53.523514] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c8323fb5-5495-4769-b120-ea8571cca5c5 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
c8323fb5-5495-4769-b120-ea8571cca5c5 ']' 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.713 [2024-11-20 17:08:53.571349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.713 [2024-11-20 17:08:53.571386] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:29.713 [2024-11-20 17:08:53.571513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.713 [2024-11-20 17:08:53.571675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.713 [2024-11-20 17:08:53.571702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.713 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:29.977 
17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.977 17:08:53 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:29.977 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.978 [2024-11-20 17:08:53.735405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:29.978 [2024-11-20 17:08:53.738105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:29.978 [2024-11-20 17:08:53.738181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:29.978 [2024-11-20 17:08:53.738231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:29.978 [2024-11-20 17:08:53.738304] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:29.978 [2024-11-20 17:08:53.738383] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:29.978 [2024-11-20 17:08:53.738413] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:29.978 [2024-11-20 17:08:53.738441] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:29.978 [2024-11-20 17:08:53.738461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.978 [2024-11-20 17:08:53.738485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:29.978 request: 00:16:29.978 { 00:16:29.978 "name": "raid_bdev1", 00:16:29.978 "raid_level": "raid5f", 00:16:29.978 "base_bdevs": [ 00:16:29.978 "malloc1", 00:16:29.978 "malloc2", 00:16:29.978 "malloc3", 00:16:29.978 "malloc4" 00:16:29.978 ], 00:16:29.978 "strip_size_kb": 64, 00:16:29.978 "superblock": false, 00:16:29.978 "method": "bdev_raid_create", 00:16:29.978 "req_id": 1 00:16:29.978 } 00:16:29.978 Got JSON-RPC error response 
00:16:29.978 response: 00:16:29.978 { 00:16:29.978 "code": -17, 00:16:29.978 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:29.978 } 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.978 [2024-11-20 17:08:53.807383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:29.978 [2024-11-20 17:08:53.807605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:29.978 [2024-11-20 17:08:53.807819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:29.978 [2024-11-20 17:08:53.807977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.978 [2024-11-20 17:08:53.810884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.978 [2024-11-20 17:08:53.811053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:29.978 [2024-11-20 17:08:53.811252] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:29.978 [2024-11-20 17:08:53.811429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:29.978 pt1 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.978 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.255 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.255 "name": "raid_bdev1", 00:16:30.255 "uuid": "c8323fb5-5495-4769-b120-ea8571cca5c5", 00:16:30.255 "strip_size_kb": 64, 00:16:30.255 "state": "configuring", 00:16:30.255 "raid_level": "raid5f", 00:16:30.255 "superblock": true, 00:16:30.255 "num_base_bdevs": 4, 00:16:30.255 "num_base_bdevs_discovered": 1, 00:16:30.255 "num_base_bdevs_operational": 4, 00:16:30.255 "base_bdevs_list": [ 00:16:30.255 { 00:16:30.255 "name": "pt1", 00:16:30.255 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.255 "is_configured": true, 00:16:30.255 "data_offset": 2048, 00:16:30.255 "data_size": 63488 00:16:30.255 }, 00:16:30.255 { 00:16:30.255 "name": null, 00:16:30.255 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.255 "is_configured": false, 00:16:30.255 "data_offset": 2048, 00:16:30.255 "data_size": 63488 00:16:30.255 }, 00:16:30.255 { 00:16:30.255 "name": null, 00:16:30.255 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:30.255 "is_configured": false, 00:16:30.255 "data_offset": 2048, 00:16:30.255 "data_size": 63488 00:16:30.255 }, 00:16:30.255 { 00:16:30.255 "name": null, 00:16:30.255 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:30.255 "is_configured": false, 00:16:30.255 "data_offset": 2048, 00:16:30.255 "data_size": 63488 00:16:30.255 } 00:16:30.255 ] 00:16:30.255 }' 
00:16:30.255 17:08:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.255 17:08:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.512 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:30.512 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.513 [2024-11-20 17:08:54.328017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:30.513 [2024-11-20 17:08:54.328461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.513 [2024-11-20 17:08:54.328502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:30.513 [2024-11-20 17:08:54.328529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.513 [2024-11-20 17:08:54.329186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.513 [2024-11-20 17:08:54.329223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:30.513 [2024-11-20 17:08:54.329336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:30.513 [2024-11-20 17:08:54.329397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:30.513 pt2 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.513 [2024-11-20 17:08:54.335927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.513 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:30.771 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.771 "name": "raid_bdev1", 00:16:30.771 "uuid": "c8323fb5-5495-4769-b120-ea8571cca5c5", 00:16:30.771 "strip_size_kb": 64, 00:16:30.771 "state": "configuring", 00:16:30.772 "raid_level": "raid5f", 00:16:30.772 "superblock": true, 00:16:30.772 "num_base_bdevs": 4, 00:16:30.772 "num_base_bdevs_discovered": 1, 00:16:30.772 "num_base_bdevs_operational": 4, 00:16:30.772 "base_bdevs_list": [ 00:16:30.772 { 00:16:30.772 "name": "pt1", 00:16:30.772 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.772 "is_configured": true, 00:16:30.772 "data_offset": 2048, 00:16:30.772 "data_size": 63488 00:16:30.772 }, 00:16:30.772 { 00:16:30.772 "name": null, 00:16:30.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.772 "is_configured": false, 00:16:30.772 "data_offset": 0, 00:16:30.772 "data_size": 63488 00:16:30.772 }, 00:16:30.772 { 00:16:30.772 "name": null, 00:16:30.772 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:30.772 "is_configured": false, 00:16:30.772 "data_offset": 2048, 00:16:30.772 "data_size": 63488 00:16:30.772 }, 00:16:30.772 { 00:16:30.772 "name": null, 00:16:30.772 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:30.772 "is_configured": false, 00:16:30.772 "data_offset": 2048, 00:16:30.772 "data_size": 63488 00:16:30.772 } 00:16:30.772 ] 00:16:30.772 }' 00:16:30.772 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.772 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.031 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:31.031 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:31.031 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:31.031 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.031 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.031 [2024-11-20 17:08:54.852138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:31.031 [2024-11-20 17:08:54.852544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.031 [2024-11-20 17:08:54.852589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:31.031 [2024-11-20 17:08:54.852605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.031 [2024-11-20 17:08:54.853287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.031 [2024-11-20 17:08:54.853317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:31.031 [2024-11-20 17:08:54.853453] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:31.031 [2024-11-20 17:08:54.853493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:31.031 pt2 00:16:31.031 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.031 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:31.031 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:31.031 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:31.031 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.031 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.032 [2024-11-20 17:08:54.864030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:31.032 [2024-11-20 17:08:54.864081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.032 [2024-11-20 17:08:54.864113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:31.032 [2024-11-20 17:08:54.864127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.032 [2024-11-20 17:08:54.864525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.032 [2024-11-20 17:08:54.864553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:31.032 [2024-11-20 17:08:54.864624] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:31.032 [2024-11-20 17:08:54.864656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:31.032 pt3 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.032 [2024-11-20 17:08:54.872026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:31.032 [2024-11-20 17:08:54.872087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.032 [2024-11-20 17:08:54.872111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:31.032 [2024-11-20 17:08:54.872124] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.032 [2024-11-20 17:08:54.872543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.032 [2024-11-20 17:08:54.872575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:31.032 [2024-11-20 17:08:54.872647] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:31.032 [2024-11-20 17:08:54.872685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:31.032 [2024-11-20 17:08:54.872875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:31.032 [2024-11-20 17:08:54.872890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:31.032 [2024-11-20 17:08:54.873176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:31.032 [2024-11-20 17:08:54.878731] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:31.032 [2024-11-20 17:08:54.878761] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:31.032 [2024-11-20 17:08:54.879015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.032 pt4 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.032 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.291 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.291 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.291 "name": "raid_bdev1", 00:16:31.291 "uuid": "c8323fb5-5495-4769-b120-ea8571cca5c5", 00:16:31.291 "strip_size_kb": 64, 00:16:31.291 "state": "online", 00:16:31.291 "raid_level": "raid5f", 00:16:31.291 "superblock": true, 00:16:31.291 "num_base_bdevs": 4, 00:16:31.291 "num_base_bdevs_discovered": 4, 00:16:31.291 "num_base_bdevs_operational": 4, 00:16:31.291 "base_bdevs_list": [ 00:16:31.291 { 00:16:31.291 "name": "pt1", 00:16:31.291 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:31.291 "is_configured": true, 00:16:31.291 
"data_offset": 2048, 00:16:31.291 "data_size": 63488 00:16:31.291 }, 00:16:31.291 { 00:16:31.291 "name": "pt2", 00:16:31.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.291 "is_configured": true, 00:16:31.291 "data_offset": 2048, 00:16:31.291 "data_size": 63488 00:16:31.291 }, 00:16:31.291 { 00:16:31.291 "name": "pt3", 00:16:31.291 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:31.291 "is_configured": true, 00:16:31.291 "data_offset": 2048, 00:16:31.291 "data_size": 63488 00:16:31.291 }, 00:16:31.291 { 00:16:31.291 "name": "pt4", 00:16:31.291 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:31.291 "is_configured": true, 00:16:31.291 "data_offset": 2048, 00:16:31.291 "data_size": 63488 00:16:31.291 } 00:16:31.291 ] 00:16:31.291 }' 00:16:31.291 17:08:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.291 17:08:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.550 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:31.550 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:31.550 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:31.550 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:31.550 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:31.550 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:31.550 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:31.550 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.550 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.550 17:08:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:31.550 [2024-11-20 17:08:55.402473] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.808 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.808 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:31.808 "name": "raid_bdev1", 00:16:31.808 "aliases": [ 00:16:31.808 "c8323fb5-5495-4769-b120-ea8571cca5c5" 00:16:31.808 ], 00:16:31.808 "product_name": "Raid Volume", 00:16:31.808 "block_size": 512, 00:16:31.808 "num_blocks": 190464, 00:16:31.808 "uuid": "c8323fb5-5495-4769-b120-ea8571cca5c5", 00:16:31.808 "assigned_rate_limits": { 00:16:31.808 "rw_ios_per_sec": 0, 00:16:31.808 "rw_mbytes_per_sec": 0, 00:16:31.808 "r_mbytes_per_sec": 0, 00:16:31.808 "w_mbytes_per_sec": 0 00:16:31.808 }, 00:16:31.808 "claimed": false, 00:16:31.808 "zoned": false, 00:16:31.808 "supported_io_types": { 00:16:31.808 "read": true, 00:16:31.808 "write": true, 00:16:31.808 "unmap": false, 00:16:31.808 "flush": false, 00:16:31.808 "reset": true, 00:16:31.808 "nvme_admin": false, 00:16:31.808 "nvme_io": false, 00:16:31.808 "nvme_io_md": false, 00:16:31.808 "write_zeroes": true, 00:16:31.808 "zcopy": false, 00:16:31.808 "get_zone_info": false, 00:16:31.808 "zone_management": false, 00:16:31.808 "zone_append": false, 00:16:31.808 "compare": false, 00:16:31.808 "compare_and_write": false, 00:16:31.808 "abort": false, 00:16:31.808 "seek_hole": false, 00:16:31.808 "seek_data": false, 00:16:31.808 "copy": false, 00:16:31.808 "nvme_iov_md": false 00:16:31.808 }, 00:16:31.808 "driver_specific": { 00:16:31.808 "raid": { 00:16:31.808 "uuid": "c8323fb5-5495-4769-b120-ea8571cca5c5", 00:16:31.808 "strip_size_kb": 64, 00:16:31.808 "state": "online", 00:16:31.808 "raid_level": "raid5f", 00:16:31.808 "superblock": true, 00:16:31.808 "num_base_bdevs": 4, 00:16:31.808 "num_base_bdevs_discovered": 4, 
00:16:31.808 "num_base_bdevs_operational": 4, 00:16:31.808 "base_bdevs_list": [ 00:16:31.808 { 00:16:31.808 "name": "pt1", 00:16:31.808 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:31.808 "is_configured": true, 00:16:31.808 "data_offset": 2048, 00:16:31.808 "data_size": 63488 00:16:31.808 }, 00:16:31.808 { 00:16:31.808 "name": "pt2", 00:16:31.808 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.808 "is_configured": true, 00:16:31.808 "data_offset": 2048, 00:16:31.808 "data_size": 63488 00:16:31.808 }, 00:16:31.808 { 00:16:31.808 "name": "pt3", 00:16:31.808 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:31.808 "is_configured": true, 00:16:31.808 "data_offset": 2048, 00:16:31.808 "data_size": 63488 00:16:31.809 }, 00:16:31.809 { 00:16:31.809 "name": "pt4", 00:16:31.809 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:31.809 "is_configured": true, 00:16:31.809 "data_offset": 2048, 00:16:31.809 "data_size": 63488 00:16:31.809 } 00:16:31.809 ] 00:16:31.809 } 00:16:31.809 } 00:16:31.809 }' 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:31.809 pt2 00:16:31.809 pt3 00:16:31.809 pt4' 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.809 17:08:55 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.809 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.809 
17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.067 [2024-11-20 17:08:55.774301] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c8323fb5-5495-4769-b120-ea8571cca5c5 '!=' c8323fb5-5495-4769-b120-ea8571cca5c5 ']' 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.067 [2024-11-20 17:08:55.818286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.067 "name": "raid_bdev1", 00:16:32.067 "uuid": "c8323fb5-5495-4769-b120-ea8571cca5c5", 00:16:32.067 "strip_size_kb": 64, 00:16:32.067 "state": "online", 00:16:32.067 "raid_level": "raid5f", 00:16:32.067 "superblock": true, 00:16:32.067 "num_base_bdevs": 4, 00:16:32.067 "num_base_bdevs_discovered": 3, 00:16:32.067 "num_base_bdevs_operational": 3, 00:16:32.067 "base_bdevs_list": [ 00:16:32.067 { 00:16:32.067 "name": null, 00:16:32.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.067 "is_configured": false, 00:16:32.067 "data_offset": 0, 00:16:32.067 "data_size": 63488 00:16:32.067 }, 00:16:32.067 { 00:16:32.067 "name": "pt2", 00:16:32.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.067 "is_configured": true, 00:16:32.067 "data_offset": 2048, 00:16:32.067 "data_size": 63488 00:16:32.067 }, 00:16:32.067 { 00:16:32.067 "name": "pt3", 00:16:32.067 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:32.067 "is_configured": true, 00:16:32.067 "data_offset": 2048, 00:16:32.067 "data_size": 63488 00:16:32.067 }, 00:16:32.067 { 00:16:32.067 "name": "pt4", 00:16:32.067 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:32.067 "is_configured": true, 00:16:32.067 
"data_offset": 2048, 00:16:32.067 "data_size": 63488 00:16:32.067 } 00:16:32.067 ] 00:16:32.067 }' 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.067 17:08:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.635 [2024-11-20 17:08:56.338352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.635 [2024-11-20 17:08:56.338418] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.635 [2024-11-20 17:08:56.338529] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.635 [2024-11-20 17:08:56.338636] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.635 [2024-11-20 17:08:56.338652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.635 [2024-11-20 17:08:56.430325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:32.635 [2024-11-20 17:08:56.430421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.635 [2024-11-20 17:08:56.430454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:32.635 [2024-11-20 17:08:56.430469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.635 pt2 00:16:32.635 [2024-11-20 17:08:56.434277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.635 [2024-11-20 17:08:56.434329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:32.635 [2024-11-20 17:08:56.434483] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:32.635 [2024-11-20 17:08:56.434570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.635 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.635 "name": "raid_bdev1", 00:16:32.635 "uuid": "c8323fb5-5495-4769-b120-ea8571cca5c5", 00:16:32.635 "strip_size_kb": 64, 00:16:32.635 "state": "configuring", 00:16:32.636 "raid_level": "raid5f", 00:16:32.636 "superblock": true, 00:16:32.636 
"num_base_bdevs": 4, 00:16:32.636 "num_base_bdevs_discovered": 1, 00:16:32.636 "num_base_bdevs_operational": 3, 00:16:32.636 "base_bdevs_list": [ 00:16:32.636 { 00:16:32.636 "name": null, 00:16:32.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.636 "is_configured": false, 00:16:32.636 "data_offset": 2048, 00:16:32.636 "data_size": 63488 00:16:32.636 }, 00:16:32.636 { 00:16:32.636 "name": "pt2", 00:16:32.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.636 "is_configured": true, 00:16:32.636 "data_offset": 2048, 00:16:32.636 "data_size": 63488 00:16:32.636 }, 00:16:32.636 { 00:16:32.636 "name": null, 00:16:32.636 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:32.636 "is_configured": false, 00:16:32.636 "data_offset": 2048, 00:16:32.636 "data_size": 63488 00:16:32.636 }, 00:16:32.636 { 00:16:32.636 "name": null, 00:16:32.636 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:32.636 "is_configured": false, 00:16:32.636 "data_offset": 2048, 00:16:32.636 "data_size": 63488 00:16:32.636 } 00:16:32.636 ] 00:16:32.636 }' 00:16:32.636 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.636 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.201 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:33.201 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:33.201 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.202 [2024-11-20 17:08:56.970749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:33.202 [2024-11-20 
17:08:56.970933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.202 [2024-11-20 17:08:56.970982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:33.202 [2024-11-20 17:08:56.970997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.202 [2024-11-20 17:08:56.971706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.202 [2024-11-20 17:08:56.971746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:33.202 [2024-11-20 17:08:56.971897] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:33.202 [2024-11-20 17:08:56.971938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:33.202 pt3 00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.202 17:08:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.202 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.202 "name": "raid_bdev1", 00:16:33.202 "uuid": "c8323fb5-5495-4769-b120-ea8571cca5c5", 00:16:33.202 "strip_size_kb": 64, 00:16:33.202 "state": "configuring", 00:16:33.202 "raid_level": "raid5f", 00:16:33.202 "superblock": true, 00:16:33.202 "num_base_bdevs": 4, 00:16:33.202 "num_base_bdevs_discovered": 2, 00:16:33.202 "num_base_bdevs_operational": 3, 00:16:33.202 "base_bdevs_list": [ 00:16:33.202 { 00:16:33.202 "name": null, 00:16:33.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.202 "is_configured": false, 00:16:33.202 "data_offset": 2048, 00:16:33.202 "data_size": 63488 00:16:33.202 }, 00:16:33.202 { 00:16:33.202 "name": "pt2", 00:16:33.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.202 "is_configured": true, 00:16:33.202 "data_offset": 2048, 00:16:33.202 "data_size": 63488 00:16:33.202 }, 00:16:33.202 { 00:16:33.202 "name": "pt3", 00:16:33.202 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:33.202 "is_configured": true, 00:16:33.202 "data_offset": 2048, 00:16:33.202 "data_size": 63488 00:16:33.202 }, 00:16:33.202 { 00:16:33.202 "name": null, 00:16:33.202 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:33.202 "is_configured": false, 00:16:33.202 "data_offset": 2048, 
00:16:33.202 "data_size": 63488 00:16:33.202 } 00:16:33.202 ] 00:16:33.202 }' 00:16:33.202 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.202 17:08:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.769 [2024-11-20 17:08:57.502874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:33.769 [2024-11-20 17:08:57.502990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.769 [2024-11-20 17:08:57.503027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:33.769 [2024-11-20 17:08:57.503042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.769 [2024-11-20 17:08:57.503660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.769 [2024-11-20 17:08:57.503684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:33.769 [2024-11-20 17:08:57.503810] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:33.769 [2024-11-20 17:08:57.503851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:33.769 [2024-11-20 17:08:57.504050] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:33.769 [2024-11-20 17:08:57.504065] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:33.769 [2024-11-20 17:08:57.504360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:33.769 [2024-11-20 17:08:57.510054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:33.769 [2024-11-20 17:08:57.510083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:33.769 [2024-11-20 17:08:57.510412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.769 pt4 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.769 
17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.769 "name": "raid_bdev1", 00:16:33.769 "uuid": "c8323fb5-5495-4769-b120-ea8571cca5c5", 00:16:33.769 "strip_size_kb": 64, 00:16:33.769 "state": "online", 00:16:33.769 "raid_level": "raid5f", 00:16:33.769 "superblock": true, 00:16:33.769 "num_base_bdevs": 4, 00:16:33.769 "num_base_bdevs_discovered": 3, 00:16:33.769 "num_base_bdevs_operational": 3, 00:16:33.769 "base_bdevs_list": [ 00:16:33.769 { 00:16:33.769 "name": null, 00:16:33.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.769 "is_configured": false, 00:16:33.769 "data_offset": 2048, 00:16:33.769 "data_size": 63488 00:16:33.769 }, 00:16:33.769 { 00:16:33.769 "name": "pt2", 00:16:33.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.769 "is_configured": true, 00:16:33.769 "data_offset": 2048, 00:16:33.769 "data_size": 63488 00:16:33.769 }, 00:16:33.769 { 00:16:33.769 "name": "pt3", 00:16:33.769 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:33.769 "is_configured": true, 00:16:33.769 "data_offset": 2048, 00:16:33.769 "data_size": 63488 00:16:33.769 }, 00:16:33.769 { 00:16:33.769 "name": "pt4", 00:16:33.769 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:33.769 "is_configured": true, 00:16:33.769 "data_offset": 2048, 00:16:33.769 "data_size": 63488 00:16:33.769 } 00:16:33.769 ] 00:16:33.769 }' 00:16:33.769 17:08:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.769 17:08:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.336 [2024-11-20 17:08:58.029413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:34.336 [2024-11-20 17:08:58.029477] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:34.336 [2024-11-20 17:08:58.029589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.336 [2024-11-20 17:08:58.029687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.336 [2024-11-20 17:08:58.029707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.336 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.336 [2024-11-20 17:08:58.101362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:34.336 [2024-11-20 17:08:58.101442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.336 [2024-11-20 17:08:58.101483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:34.336 [2024-11-20 17:08:58.101502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.336 [2024-11-20 17:08:58.104312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.336 [2024-11-20 17:08:58.104638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:34.336 [2024-11-20 17:08:58.104770] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:34.336 [2024-11-20 17:08:58.104853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:34.337 
[2024-11-20 17:08:58.105027] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:34.337 [2024-11-20 17:08:58.105049] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:34.337 [2024-11-20 17:08:58.105069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:34.337 [2024-11-20 17:08:58.105137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:34.337 [2024-11-20 17:08:58.105284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:34.337 pt1 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.337 "name": "raid_bdev1", 00:16:34.337 "uuid": "c8323fb5-5495-4769-b120-ea8571cca5c5", 00:16:34.337 "strip_size_kb": 64, 00:16:34.337 "state": "configuring", 00:16:34.337 "raid_level": "raid5f", 00:16:34.337 "superblock": true, 00:16:34.337 "num_base_bdevs": 4, 00:16:34.337 "num_base_bdevs_discovered": 2, 00:16:34.337 "num_base_bdevs_operational": 3, 00:16:34.337 "base_bdevs_list": [ 00:16:34.337 { 00:16:34.337 "name": null, 00:16:34.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.337 "is_configured": false, 00:16:34.337 "data_offset": 2048, 00:16:34.337 "data_size": 63488 00:16:34.337 }, 00:16:34.337 { 00:16:34.337 "name": "pt2", 00:16:34.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:34.337 "is_configured": true, 00:16:34.337 "data_offset": 2048, 00:16:34.337 "data_size": 63488 00:16:34.337 }, 00:16:34.337 { 00:16:34.337 "name": "pt3", 00:16:34.337 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:34.337 "is_configured": true, 00:16:34.337 "data_offset": 2048, 00:16:34.337 "data_size": 63488 00:16:34.337 }, 00:16:34.337 { 00:16:34.337 "name": null, 00:16:34.337 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:34.337 "is_configured": false, 00:16:34.337 "data_offset": 2048, 00:16:34.337 "data_size": 63488 00:16:34.337 } 00:16:34.337 ] 
00:16:34.337 }' 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.337 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.904 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:34.904 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:34.904 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.904 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.905 [2024-11-20 17:08:58.669615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:34.905 [2024-11-20 17:08:58.669730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.905 [2024-11-20 17:08:58.669780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:34.905 [2024-11-20 17:08:58.669808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.905 [2024-11-20 17:08:58.670541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.905 [2024-11-20 17:08:58.670567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:34.905 [2024-11-20 17:08:58.670680] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:34.905 [2024-11-20 17:08:58.670715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:34.905 [2024-11-20 17:08:58.670992] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:34.905 [2024-11-20 17:08:58.671009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:34.905 [2024-11-20 17:08:58.671318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:34.905 [2024-11-20 17:08:58.678118] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:34.905 pt4 00:16:34.905 [2024-11-20 17:08:58.678439] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:34.905 [2024-11-20 17:08:58.678915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.905 17:08:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.905 "name": "raid_bdev1", 00:16:34.905 "uuid": "c8323fb5-5495-4769-b120-ea8571cca5c5", 00:16:34.905 "strip_size_kb": 64, 00:16:34.905 "state": "online", 00:16:34.905 "raid_level": "raid5f", 00:16:34.905 "superblock": true, 00:16:34.905 "num_base_bdevs": 4, 00:16:34.905 "num_base_bdevs_discovered": 3, 00:16:34.905 "num_base_bdevs_operational": 3, 00:16:34.905 "base_bdevs_list": [ 00:16:34.905 { 00:16:34.905 "name": null, 00:16:34.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.905 "is_configured": false, 00:16:34.905 "data_offset": 2048, 00:16:34.905 "data_size": 63488 00:16:34.905 }, 00:16:34.905 { 00:16:34.905 "name": "pt2", 00:16:34.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:34.905 "is_configured": true, 00:16:34.905 "data_offset": 2048, 00:16:34.905 "data_size": 63488 00:16:34.905 }, 00:16:34.905 { 00:16:34.905 "name": "pt3", 00:16:34.905 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:34.905 "is_configured": true, 00:16:34.905 "data_offset": 2048, 00:16:34.905 "data_size": 63488 
00:16:34.905 }, 00:16:34.905 { 00:16:34.905 "name": "pt4", 00:16:34.905 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:34.905 "is_configured": true, 00:16:34.905 "data_offset": 2048, 00:16:34.905 "data_size": 63488 00:16:34.905 } 00:16:34.905 ] 00:16:34.905 }' 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.905 17:08:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.472 17:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:35.473 17:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:35.473 17:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.473 17:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.473 17:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.473 17:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:35.473 17:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:35.473 17:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:35.473 17:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.473 17:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.473 [2024-11-20 17:08:59.267326] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.473 17:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.473 17:08:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c8323fb5-5495-4769-b120-ea8571cca5c5 '!=' c8323fb5-5495-4769-b120-ea8571cca5c5 ']' 00:16:35.473 17:08:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84293 00:16:35.473 17:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84293 ']' 00:16:35.473 17:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84293 00:16:35.473 17:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:35.473 17:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:35.473 17:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84293 00:16:35.735 killing process with pid 84293 00:16:35.735 17:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:35.735 17:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:35.735 17:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84293' 00:16:35.735 17:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84293 00:16:35.735 [2024-11-20 17:08:59.344321] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:35.735 17:08:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84293 00:16:35.735 [2024-11-20 17:08:59.344498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.735 [2024-11-20 17:08:59.344619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:35.735 [2024-11-20 17:08:59.344645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:35.993 [2024-11-20 17:08:59.663314] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.369 ************************************ 00:16:37.369 END TEST raid5f_superblock_test 00:16:37.369 
************************************ 00:16:37.369 17:09:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:37.369 00:16:37.369 real 0m9.496s 00:16:37.369 user 0m15.515s 00:16:37.369 sys 0m1.344s 00:16:37.369 17:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.369 17:09:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.369 17:09:00 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:37.369 17:09:00 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:37.369 17:09:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:37.369 17:09:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.369 17:09:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:37.369 ************************************ 00:16:37.369 START TEST raid5f_rebuild_test 00:16:37.370 ************************************ 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:37.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84784 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84784 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84784 ']' 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.370 17:09:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.370 [2024-11-20 17:09:00.987914] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:16:37.370 [2024-11-20 17:09:00.988373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84784 ] 00:16:37.370 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:37.370 Zero copy mechanism will not be used. 00:16:37.370 [2024-11-20 17:09:01.171059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.629 [2024-11-20 17:09:01.335340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.888 [2024-11-20 17:09:01.592255] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:37.888 [2024-11-20 17:09:01.592565] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.147 17:09:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.147 17:09:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:38.147 17:09:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:38.147 17:09:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:38.147 17:09:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.147 17:09:01 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.405 BaseBdev1_malloc 00:16:38.405 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.405 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:38.405 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.405 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.405 [2024-11-20 17:09:02.046188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:38.405 [2024-11-20 17:09:02.046330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.405 [2024-11-20 17:09:02.046368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:38.405 [2024-11-20 17:09:02.046388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.405 [2024-11-20 17:09:02.050117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.405 [2024-11-20 17:09:02.050205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:38.405 BaseBdev1 00:16:38.405 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.405 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:38.405 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:38.405 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.405 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.405 BaseBdev2_malloc 00:16:38.405 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:38.405 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.406 [2024-11-20 17:09:02.112573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:38.406 [2024-11-20 17:09:02.112954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.406 [2024-11-20 17:09:02.113019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:38.406 [2024-11-20 17:09:02.113073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.406 [2024-11-20 17:09:02.116446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.406 [2024-11-20 17:09:02.116702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:38.406 BaseBdev2 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.406 BaseBdev3_malloc 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.406 [2024-11-20 17:09:02.192782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:38.406 [2024-11-20 17:09:02.193175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.406 [2024-11-20 17:09:02.193220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:38.406 [2024-11-20 17:09:02.193258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.406 [2024-11-20 17:09:02.196971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.406 [2024-11-20 17:09:02.197185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:38.406 BaseBdev3 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.406 BaseBdev4_malloc 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.406 [2024-11-20 17:09:02.259231] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:38.406 [2024-11-20 17:09:02.259546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.406 [2024-11-20 17:09:02.259642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:38.406 [2024-11-20 17:09:02.259802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.406 [2024-11-20 17:09:02.263369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.406 [2024-11-20 17:09:02.263600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:38.406 BaseBdev4 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.406 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.665 spare_malloc 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.665 spare_delay 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.665 [2024-11-20 17:09:02.331047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:38.665 [2024-11-20 17:09:02.331129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.665 [2024-11-20 17:09:02.331161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:38.665 [2024-11-20 17:09:02.331205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.665 [2024-11-20 17:09:02.334756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.665 [2024-11-20 17:09:02.334812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:38.665 spare 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.665 [2024-11-20 17:09:02.339160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:38.665 [2024-11-20 17:09:02.342336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:38.665 [2024-11-20 17:09:02.342442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:38.665 [2024-11-20 17:09:02.342587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:38.665 [2024-11-20 17:09:02.342714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:38.665 
[2024-11-20 17:09:02.342736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:38.665 [2024-11-20 17:09:02.343184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:38.665 [2024-11-20 17:09:02.351010] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:38.665 [2024-11-20 17:09:02.351036] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:38.665 [2024-11-20 17:09:02.351375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.665 17:09:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.665 "name": "raid_bdev1", 00:16:38.665 "uuid": "166db7d0-b663-4171-bbab-f4c6ef67b2a9", 00:16:38.665 "strip_size_kb": 64, 00:16:38.665 "state": "online", 00:16:38.665 "raid_level": "raid5f", 00:16:38.665 "superblock": false, 00:16:38.665 "num_base_bdevs": 4, 00:16:38.665 "num_base_bdevs_discovered": 4, 00:16:38.665 "num_base_bdevs_operational": 4, 00:16:38.665 "base_bdevs_list": [ 00:16:38.665 { 00:16:38.665 "name": "BaseBdev1", 00:16:38.665 "uuid": "a28bded2-5755-5c9a-a057-8f0cdef775f3", 00:16:38.665 "is_configured": true, 00:16:38.665 "data_offset": 0, 00:16:38.665 "data_size": 65536 00:16:38.665 }, 00:16:38.665 { 00:16:38.665 "name": "BaseBdev2", 00:16:38.665 "uuid": "665fc75b-6a79-56a8-9455-67087154871b", 00:16:38.665 "is_configured": true, 00:16:38.665 "data_offset": 0, 00:16:38.665 "data_size": 65536 00:16:38.665 }, 00:16:38.665 { 00:16:38.665 "name": "BaseBdev3", 00:16:38.665 "uuid": "1132880f-6f56-59d3-93ac-1d915907d09f", 00:16:38.665 "is_configured": true, 00:16:38.665 "data_offset": 0, 00:16:38.665 "data_size": 65536 00:16:38.665 }, 00:16:38.665 { 00:16:38.665 "name": "BaseBdev4", 00:16:38.665 "uuid": "6e54b218-2ba1-5fdc-a240-5198855c69c6", 00:16:38.665 "is_configured": true, 00:16:38.665 "data_offset": 0, 00:16:38.665 "data_size": 65536 00:16:38.665 } 00:16:38.665 ] 00:16:38.665 }' 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.665 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.233 [2024-11-20 17:09:02.889260] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:39.233 17:09:02 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:39.233 17:09:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:39.492 [2024-11-20 17:09:03.269121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:39.492 /dev/nbd0 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.492 1+0 records in 00:16:39.492 1+0 records out 00:16:39.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029197 s, 14.0 MB/s 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:39.492 17:09:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:39.493 17:09:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:39.493 17:09:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:40.431 512+0 records in 00:16:40.431 512+0 records out 00:16:40.431 100663296 bytes (101 MB, 96 MiB) copied, 0.617693 s, 163 MB/s 00:16:40.431 17:09:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:40.431 17:09:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:40.431 17:09:03 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:40.431 17:09:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:40.431 17:09:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:40.431 17:09:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:40.431 17:09:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:40.431 [2024-11-20 17:09:04.239232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.431 [2024-11-20 17:09:04.283282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.431 17:09:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.690 17:09:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.690 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.690 "name": "raid_bdev1", 00:16:40.690 "uuid": "166db7d0-b663-4171-bbab-f4c6ef67b2a9", 00:16:40.690 "strip_size_kb": 64, 00:16:40.690 "state": "online", 00:16:40.690 "raid_level": "raid5f", 00:16:40.690 "superblock": false, 00:16:40.690 
"num_base_bdevs": 4, 00:16:40.690 "num_base_bdevs_discovered": 3, 00:16:40.690 "num_base_bdevs_operational": 3, 00:16:40.690 "base_bdevs_list": [ 00:16:40.690 { 00:16:40.690 "name": null, 00:16:40.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.690 "is_configured": false, 00:16:40.690 "data_offset": 0, 00:16:40.690 "data_size": 65536 00:16:40.690 }, 00:16:40.690 { 00:16:40.690 "name": "BaseBdev2", 00:16:40.690 "uuid": "665fc75b-6a79-56a8-9455-67087154871b", 00:16:40.690 "is_configured": true, 00:16:40.690 "data_offset": 0, 00:16:40.690 "data_size": 65536 00:16:40.690 }, 00:16:40.690 { 00:16:40.690 "name": "BaseBdev3", 00:16:40.690 "uuid": "1132880f-6f56-59d3-93ac-1d915907d09f", 00:16:40.690 "is_configured": true, 00:16:40.690 "data_offset": 0, 00:16:40.690 "data_size": 65536 00:16:40.690 }, 00:16:40.690 { 00:16:40.690 "name": "BaseBdev4", 00:16:40.690 "uuid": "6e54b218-2ba1-5fdc-a240-5198855c69c6", 00:16:40.690 "is_configured": true, 00:16:40.690 "data_offset": 0, 00:16:40.690 "data_size": 65536 00:16:40.690 } 00:16:40.690 ] 00:16:40.690 }' 00:16:40.690 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.690 17:09:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.958 17:09:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:40.958 17:09:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.958 17:09:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.958 [2024-11-20 17:09:04.791464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:40.958 [2024-11-20 17:09:04.806235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:40.958 17:09:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.958 17:09:04 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@647 -- # sleep 1 00:16:40.958 [2024-11-20 17:09:04.815978] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:42.348 17:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.348 17:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.349 17:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.349 17:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.349 17:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.349 17:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.349 17:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.349 17:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.349 17:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.349 17:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.349 17:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.349 "name": "raid_bdev1", 00:16:42.349 "uuid": "166db7d0-b663-4171-bbab-f4c6ef67b2a9", 00:16:42.349 "strip_size_kb": 64, 00:16:42.349 "state": "online", 00:16:42.349 "raid_level": "raid5f", 00:16:42.349 "superblock": false, 00:16:42.349 "num_base_bdevs": 4, 00:16:42.349 "num_base_bdevs_discovered": 4, 00:16:42.349 "num_base_bdevs_operational": 4, 00:16:42.349 "process": { 00:16:42.349 "type": "rebuild", 00:16:42.349 "target": "spare", 00:16:42.349 "progress": { 00:16:42.349 "blocks": 17280, 00:16:42.349 "percent": 8 00:16:42.349 } 00:16:42.349 }, 00:16:42.349 "base_bdevs_list": [ 00:16:42.349 { 
00:16:42.349 "name": "spare", 00:16:42.349 "uuid": "007636eb-1d20-5c95-953f-fd34e9a4b669", 00:16:42.349 "is_configured": true, 00:16:42.349 "data_offset": 0, 00:16:42.349 "data_size": 65536 00:16:42.349 }, 00:16:42.349 { 00:16:42.349 "name": "BaseBdev2", 00:16:42.349 "uuid": "665fc75b-6a79-56a8-9455-67087154871b", 00:16:42.349 "is_configured": true, 00:16:42.349 "data_offset": 0, 00:16:42.349 "data_size": 65536 00:16:42.349 }, 00:16:42.349 { 00:16:42.349 "name": "BaseBdev3", 00:16:42.349 "uuid": "1132880f-6f56-59d3-93ac-1d915907d09f", 00:16:42.349 "is_configured": true, 00:16:42.349 "data_offset": 0, 00:16:42.349 "data_size": 65536 00:16:42.349 }, 00:16:42.349 { 00:16:42.349 "name": "BaseBdev4", 00:16:42.349 "uuid": "6e54b218-2ba1-5fdc-a240-5198855c69c6", 00:16:42.349 "is_configured": true, 00:16:42.349 "data_offset": 0, 00:16:42.349 "data_size": 65536 00:16:42.349 } 00:16:42.349 ] 00:16:42.349 }' 00:16:42.349 17:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.349 17:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.349 17:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.349 17:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.349 17:09:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:42.349 17:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.349 17:09:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.349 [2024-11-20 17:09:05.997546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.349 [2024-11-20 17:09:06.027637] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:42.349 [2024-11-20 17:09:06.027725] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.349 [2024-11-20 17:09:06.027753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.349 [2024-11-20 17:09:06.027790] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.349 17:09:06 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.349 "name": "raid_bdev1", 00:16:42.349 "uuid": "166db7d0-b663-4171-bbab-f4c6ef67b2a9", 00:16:42.349 "strip_size_kb": 64, 00:16:42.349 "state": "online", 00:16:42.349 "raid_level": "raid5f", 00:16:42.349 "superblock": false, 00:16:42.349 "num_base_bdevs": 4, 00:16:42.349 "num_base_bdevs_discovered": 3, 00:16:42.349 "num_base_bdevs_operational": 3, 00:16:42.349 "base_bdevs_list": [ 00:16:42.349 { 00:16:42.349 "name": null, 00:16:42.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.349 "is_configured": false, 00:16:42.349 "data_offset": 0, 00:16:42.349 "data_size": 65536 00:16:42.349 }, 00:16:42.349 { 00:16:42.349 "name": "BaseBdev2", 00:16:42.349 "uuid": "665fc75b-6a79-56a8-9455-67087154871b", 00:16:42.349 "is_configured": true, 00:16:42.349 "data_offset": 0, 00:16:42.349 "data_size": 65536 00:16:42.349 }, 00:16:42.349 { 00:16:42.349 "name": "BaseBdev3", 00:16:42.349 "uuid": "1132880f-6f56-59d3-93ac-1d915907d09f", 00:16:42.349 "is_configured": true, 00:16:42.349 "data_offset": 0, 00:16:42.349 "data_size": 65536 00:16:42.349 }, 00:16:42.349 { 00:16:42.349 "name": "BaseBdev4", 00:16:42.349 "uuid": "6e54b218-2ba1-5fdc-a240-5198855c69c6", 00:16:42.349 "is_configured": true, 00:16:42.349 "data_offset": 0, 00:16:42.349 "data_size": 65536 00:16:42.349 } 00:16:42.349 ] 00:16:42.349 }' 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.349 17:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.917 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.917 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.917 17:09:06 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.917 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.917 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.917 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.917 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.917 17:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.917 17:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.917 17:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.917 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.917 "name": "raid_bdev1", 00:16:42.917 "uuid": "166db7d0-b663-4171-bbab-f4c6ef67b2a9", 00:16:42.917 "strip_size_kb": 64, 00:16:42.917 "state": "online", 00:16:42.917 "raid_level": "raid5f", 00:16:42.917 "superblock": false, 00:16:42.917 "num_base_bdevs": 4, 00:16:42.917 "num_base_bdevs_discovered": 3, 00:16:42.917 "num_base_bdevs_operational": 3, 00:16:42.917 "base_bdevs_list": [ 00:16:42.917 { 00:16:42.917 "name": null, 00:16:42.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.917 "is_configured": false, 00:16:42.917 "data_offset": 0, 00:16:42.917 "data_size": 65536 00:16:42.917 }, 00:16:42.917 { 00:16:42.917 "name": "BaseBdev2", 00:16:42.918 "uuid": "665fc75b-6a79-56a8-9455-67087154871b", 00:16:42.918 "is_configured": true, 00:16:42.918 "data_offset": 0, 00:16:42.918 "data_size": 65536 00:16:42.918 }, 00:16:42.918 { 00:16:42.918 "name": "BaseBdev3", 00:16:42.918 "uuid": "1132880f-6f56-59d3-93ac-1d915907d09f", 00:16:42.918 "is_configured": true, 00:16:42.918 "data_offset": 0, 00:16:42.918 "data_size": 65536 00:16:42.918 }, 00:16:42.918 { 00:16:42.918 "name": 
"BaseBdev4", 00:16:42.918 "uuid": "6e54b218-2ba1-5fdc-a240-5198855c69c6", 00:16:42.918 "is_configured": true, 00:16:42.918 "data_offset": 0, 00:16:42.918 "data_size": 65536 00:16:42.918 } 00:16:42.918 ] 00:16:42.918 }' 00:16:42.918 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.918 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.918 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.918 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.918 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:42.918 17:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.918 17:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.918 [2024-11-20 17:09:06.727571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:42.918 [2024-11-20 17:09:06.741951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:42.918 17:09:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.918 17:09:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:42.918 [2024-11-20 17:09:06.751996] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.294 "name": "raid_bdev1", 00:16:44.294 "uuid": "166db7d0-b663-4171-bbab-f4c6ef67b2a9", 00:16:44.294 "strip_size_kb": 64, 00:16:44.294 "state": "online", 00:16:44.294 "raid_level": "raid5f", 00:16:44.294 "superblock": false, 00:16:44.294 "num_base_bdevs": 4, 00:16:44.294 "num_base_bdevs_discovered": 4, 00:16:44.294 "num_base_bdevs_operational": 4, 00:16:44.294 "process": { 00:16:44.294 "type": "rebuild", 00:16:44.294 "target": "spare", 00:16:44.294 "progress": { 00:16:44.294 "blocks": 17280, 00:16:44.294 "percent": 8 00:16:44.294 } 00:16:44.294 }, 00:16:44.294 "base_bdevs_list": [ 00:16:44.294 { 00:16:44.294 "name": "spare", 00:16:44.294 "uuid": "007636eb-1d20-5c95-953f-fd34e9a4b669", 00:16:44.294 "is_configured": true, 00:16:44.294 "data_offset": 0, 00:16:44.294 "data_size": 65536 00:16:44.294 }, 00:16:44.294 { 00:16:44.294 "name": "BaseBdev2", 00:16:44.294 "uuid": "665fc75b-6a79-56a8-9455-67087154871b", 00:16:44.294 "is_configured": true, 00:16:44.294 "data_offset": 0, 00:16:44.294 "data_size": 65536 00:16:44.294 }, 00:16:44.294 { 00:16:44.294 "name": "BaseBdev3", 00:16:44.294 "uuid": "1132880f-6f56-59d3-93ac-1d915907d09f", 00:16:44.294 "is_configured": true, 00:16:44.294 
"data_offset": 0, 00:16:44.294 "data_size": 65536 00:16:44.294 }, 00:16:44.294 { 00:16:44.294 "name": "BaseBdev4", 00:16:44.294 "uuid": "6e54b218-2ba1-5fdc-a240-5198855c69c6", 00:16:44.294 "is_configured": true, 00:16:44.294 "data_offset": 0, 00:16:44.294 "data_size": 65536 00:16:44.294 } 00:16:44.294 ] 00:16:44.294 }' 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=663 00:16:44.294 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:44.295 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.295 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.295 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.295 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.295 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.295 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.295 17:09:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.295 17:09:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.295 17:09:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.295 17:09:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.295 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.295 "name": "raid_bdev1", 00:16:44.295 "uuid": "166db7d0-b663-4171-bbab-f4c6ef67b2a9", 00:16:44.295 "strip_size_kb": 64, 00:16:44.295 "state": "online", 00:16:44.295 "raid_level": "raid5f", 00:16:44.295 "superblock": false, 00:16:44.295 "num_base_bdevs": 4, 00:16:44.295 "num_base_bdevs_discovered": 4, 00:16:44.295 "num_base_bdevs_operational": 4, 00:16:44.295 "process": { 00:16:44.295 "type": "rebuild", 00:16:44.295 "target": "spare", 00:16:44.295 "progress": { 00:16:44.295 "blocks": 21120, 00:16:44.295 "percent": 10 00:16:44.295 } 00:16:44.295 }, 00:16:44.295 "base_bdevs_list": [ 00:16:44.295 { 00:16:44.295 "name": "spare", 00:16:44.295 "uuid": "007636eb-1d20-5c95-953f-fd34e9a4b669", 00:16:44.295 "is_configured": true, 00:16:44.295 "data_offset": 0, 00:16:44.295 "data_size": 65536 00:16:44.295 }, 00:16:44.295 { 00:16:44.295 "name": "BaseBdev2", 00:16:44.295 "uuid": "665fc75b-6a79-56a8-9455-67087154871b", 00:16:44.295 "is_configured": true, 00:16:44.295 "data_offset": 0, 00:16:44.295 "data_size": 65536 00:16:44.295 }, 00:16:44.295 { 00:16:44.295 "name": "BaseBdev3", 00:16:44.295 "uuid": "1132880f-6f56-59d3-93ac-1d915907d09f", 00:16:44.295 "is_configured": true, 00:16:44.295 "data_offset": 0, 00:16:44.295 "data_size": 65536 00:16:44.295 }, 00:16:44.295 { 00:16:44.295 "name": "BaseBdev4", 00:16:44.295 "uuid": "6e54b218-2ba1-5fdc-a240-5198855c69c6", 00:16:44.295 "is_configured": true, 00:16:44.295 "data_offset": 0, 00:16:44.295 "data_size": 65536 00:16:44.295 } 
00:16:44.295 ] 00:16:44.295 }' 00:16:44.295 17:09:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.295 17:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.295 17:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.295 17:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.295 17:09:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:45.230 17:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:45.230 17:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.230 17:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.230 17:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.230 17:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.230 17:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.231 17:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.231 17:09:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.231 17:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.231 17:09:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.493 17:09:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.493 17:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.493 "name": "raid_bdev1", 00:16:45.493 "uuid": "166db7d0-b663-4171-bbab-f4c6ef67b2a9", 00:16:45.493 
"strip_size_kb": 64, 00:16:45.493 "state": "online", 00:16:45.493 "raid_level": "raid5f", 00:16:45.493 "superblock": false, 00:16:45.493 "num_base_bdevs": 4, 00:16:45.493 "num_base_bdevs_discovered": 4, 00:16:45.493 "num_base_bdevs_operational": 4, 00:16:45.493 "process": { 00:16:45.493 "type": "rebuild", 00:16:45.493 "target": "spare", 00:16:45.493 "progress": { 00:16:45.493 "blocks": 44160, 00:16:45.493 "percent": 22 00:16:45.493 } 00:16:45.493 }, 00:16:45.493 "base_bdevs_list": [ 00:16:45.493 { 00:16:45.493 "name": "spare", 00:16:45.493 "uuid": "007636eb-1d20-5c95-953f-fd34e9a4b669", 00:16:45.493 "is_configured": true, 00:16:45.493 "data_offset": 0, 00:16:45.493 "data_size": 65536 00:16:45.493 }, 00:16:45.493 { 00:16:45.493 "name": "BaseBdev2", 00:16:45.493 "uuid": "665fc75b-6a79-56a8-9455-67087154871b", 00:16:45.493 "is_configured": true, 00:16:45.493 "data_offset": 0, 00:16:45.493 "data_size": 65536 00:16:45.493 }, 00:16:45.493 { 00:16:45.493 "name": "BaseBdev3", 00:16:45.493 "uuid": "1132880f-6f56-59d3-93ac-1d915907d09f", 00:16:45.493 "is_configured": true, 00:16:45.493 "data_offset": 0, 00:16:45.493 "data_size": 65536 00:16:45.493 }, 00:16:45.493 { 00:16:45.493 "name": "BaseBdev4", 00:16:45.493 "uuid": "6e54b218-2ba1-5fdc-a240-5198855c69c6", 00:16:45.493 "is_configured": true, 00:16:45.493 "data_offset": 0, 00:16:45.493 "data_size": 65536 00:16:45.493 } 00:16:45.493 ] 00:16:45.493 }' 00:16:45.493 17:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.493 17:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.493 17:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.493 17:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.493 17:09:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:46.431 17:09:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:46.431 17:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.431 17:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.431 17:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.431 17:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.431 17:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.431 17:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.431 17:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.431 17:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.431 17:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.431 17:09:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.431 17:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.431 "name": "raid_bdev1", 00:16:46.431 "uuid": "166db7d0-b663-4171-bbab-f4c6ef67b2a9", 00:16:46.431 "strip_size_kb": 64, 00:16:46.431 "state": "online", 00:16:46.431 "raid_level": "raid5f", 00:16:46.431 "superblock": false, 00:16:46.431 "num_base_bdevs": 4, 00:16:46.431 "num_base_bdevs_discovered": 4, 00:16:46.431 "num_base_bdevs_operational": 4, 00:16:46.431 "process": { 00:16:46.431 "type": "rebuild", 00:16:46.431 "target": "spare", 00:16:46.431 "progress": { 00:16:46.431 "blocks": 65280, 00:16:46.431 "percent": 33 00:16:46.431 } 00:16:46.431 }, 00:16:46.431 "base_bdevs_list": [ 00:16:46.431 { 00:16:46.431 "name": "spare", 00:16:46.431 "uuid": "007636eb-1d20-5c95-953f-fd34e9a4b669", 
00:16:46.431 "is_configured": true, 00:16:46.431 "data_offset": 0, 00:16:46.431 "data_size": 65536 00:16:46.431 }, 00:16:46.431 { 00:16:46.431 "name": "BaseBdev2", 00:16:46.431 "uuid": "665fc75b-6a79-56a8-9455-67087154871b", 00:16:46.431 "is_configured": true, 00:16:46.431 "data_offset": 0, 00:16:46.431 "data_size": 65536 00:16:46.431 }, 00:16:46.431 { 00:16:46.431 "name": "BaseBdev3", 00:16:46.431 "uuid": "1132880f-6f56-59d3-93ac-1d915907d09f", 00:16:46.431 "is_configured": true, 00:16:46.431 "data_offset": 0, 00:16:46.431 "data_size": 65536 00:16:46.431 }, 00:16:46.431 { 00:16:46.431 "name": "BaseBdev4", 00:16:46.431 "uuid": "6e54b218-2ba1-5fdc-a240-5198855c69c6", 00:16:46.431 "is_configured": true, 00:16:46.431 "data_offset": 0, 00:16:46.431 "data_size": 65536 00:16:46.431 } 00:16:46.431 ] 00:16:46.431 }' 00:16:46.431 17:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.689 17:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.689 17:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.689 17:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.689 17:09:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:47.625 17:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:47.625 17:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.625 17:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.625 17:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.625 17:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.625 17:09:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.625 17:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.625 17:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.625 17:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.625 17:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.625 17:09:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.625 17:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.625 "name": "raid_bdev1", 00:16:47.625 "uuid": "166db7d0-b663-4171-bbab-f4c6ef67b2a9", 00:16:47.625 "strip_size_kb": 64, 00:16:47.625 "state": "online", 00:16:47.625 "raid_level": "raid5f", 00:16:47.625 "superblock": false, 00:16:47.625 "num_base_bdevs": 4, 00:16:47.625 "num_base_bdevs_discovered": 4, 00:16:47.625 "num_base_bdevs_operational": 4, 00:16:47.625 "process": { 00:16:47.625 "type": "rebuild", 00:16:47.625 "target": "spare", 00:16:47.625 "progress": { 00:16:47.625 "blocks": 88320, 00:16:47.625 "percent": 44 00:16:47.625 } 00:16:47.625 }, 00:16:47.625 "base_bdevs_list": [ 00:16:47.625 { 00:16:47.625 "name": "spare", 00:16:47.626 "uuid": "007636eb-1d20-5c95-953f-fd34e9a4b669", 00:16:47.626 "is_configured": true, 00:16:47.626 "data_offset": 0, 00:16:47.626 "data_size": 65536 00:16:47.626 }, 00:16:47.626 { 00:16:47.626 "name": "BaseBdev2", 00:16:47.626 "uuid": "665fc75b-6a79-56a8-9455-67087154871b", 00:16:47.626 "is_configured": true, 00:16:47.626 "data_offset": 0, 00:16:47.626 "data_size": 65536 00:16:47.626 }, 00:16:47.626 { 00:16:47.626 "name": "BaseBdev3", 00:16:47.626 "uuid": "1132880f-6f56-59d3-93ac-1d915907d09f", 00:16:47.626 "is_configured": true, 00:16:47.626 "data_offset": 0, 00:16:47.626 "data_size": 65536 00:16:47.626 }, 00:16:47.626 { 00:16:47.626 "name": 
"BaseBdev4", 00:16:47.626 "uuid": "6e54b218-2ba1-5fdc-a240-5198855c69c6", 00:16:47.626 "is_configured": true, 00:16:47.626 "data_offset": 0, 00:16:47.626 "data_size": 65536 00:16:47.626 } 00:16:47.626 ] 00:16:47.626 }' 00:16:47.626 17:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.885 17:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.885 17:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.885 17:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.885 17:09:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:48.822 17:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:48.822 17:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.822 17:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.822 17:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.822 17:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.822 17:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.822 17:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.822 17:09:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.822 17:09:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.822 17:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.822 17:09:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.822 17:09:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.822 "name": "raid_bdev1", 00:16:48.822 "uuid": "166db7d0-b663-4171-bbab-f4c6ef67b2a9", 00:16:48.822 "strip_size_kb": 64, 00:16:48.822 "state": "online", 00:16:48.822 "raid_level": "raid5f", 00:16:48.822 "superblock": false, 00:16:48.822 "num_base_bdevs": 4, 00:16:48.822 "num_base_bdevs_discovered": 4, 00:16:48.822 "num_base_bdevs_operational": 4, 00:16:48.822 "process": { 00:16:48.822 "type": "rebuild", 00:16:48.822 "target": "spare", 00:16:48.822 "progress": { 00:16:48.822 "blocks": 109440, 00:16:48.822 "percent": 55 00:16:48.822 } 00:16:48.822 }, 00:16:48.822 "base_bdevs_list": [ 00:16:48.822 { 00:16:48.822 "name": "spare", 00:16:48.822 "uuid": "007636eb-1d20-5c95-953f-fd34e9a4b669", 00:16:48.822 "is_configured": true, 00:16:48.822 "data_offset": 0, 00:16:48.822 "data_size": 65536 00:16:48.822 }, 00:16:48.822 { 00:16:48.822 "name": "BaseBdev2", 00:16:48.822 "uuid": "665fc75b-6a79-56a8-9455-67087154871b", 00:16:48.822 "is_configured": true, 00:16:48.822 "data_offset": 0, 00:16:48.822 "data_size": 65536 00:16:48.822 }, 00:16:48.822 { 00:16:48.822 "name": "BaseBdev3", 00:16:48.822 "uuid": "1132880f-6f56-59d3-93ac-1d915907d09f", 00:16:48.822 "is_configured": true, 00:16:48.822 "data_offset": 0, 00:16:48.822 "data_size": 65536 00:16:48.822 }, 00:16:48.822 { 00:16:48.822 "name": "BaseBdev4", 00:16:48.822 "uuid": "6e54b218-2ba1-5fdc-a240-5198855c69c6", 00:16:48.822 "is_configured": true, 00:16:48.822 "data_offset": 0, 00:16:48.822 "data_size": 65536 00:16:48.822 } 00:16:48.822 ] 00:16:48.822 }' 00:16:48.822 17:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.822 17:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.822 17:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.081 17:09:12 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.081 17:09:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:50.017 17:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.017 17:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.017 17:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.017 17:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.017 17:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.017 17:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.017 17:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.017 17:09:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.017 17:09:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.017 17:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.017 17:09:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.017 17:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.017 "name": "raid_bdev1", 00:16:50.017 "uuid": "166db7d0-b663-4171-bbab-f4c6ef67b2a9", 00:16:50.017 "strip_size_kb": 64, 00:16:50.017 "state": "online", 00:16:50.017 "raid_level": "raid5f", 00:16:50.017 "superblock": false, 00:16:50.017 "num_base_bdevs": 4, 00:16:50.017 "num_base_bdevs_discovered": 4, 00:16:50.017 "num_base_bdevs_operational": 4, 00:16:50.017 "process": { 00:16:50.017 "type": "rebuild", 00:16:50.017 "target": "spare", 00:16:50.017 "progress": { 00:16:50.017 "blocks": 132480, 00:16:50.017 "percent": 67 
00:16:50.017 } 00:16:50.017 }, 00:16:50.017 "base_bdevs_list": [ 00:16:50.017 { 00:16:50.017 "name": "spare", 00:16:50.017 "uuid": "007636eb-1d20-5c95-953f-fd34e9a4b669", 00:16:50.017 "is_configured": true, 00:16:50.017 "data_offset": 0, 00:16:50.017 "data_size": 65536 00:16:50.017 }, 00:16:50.017 { 00:16:50.017 "name": "BaseBdev2", 00:16:50.017 "uuid": "665fc75b-6a79-56a8-9455-67087154871b", 00:16:50.017 "is_configured": true, 00:16:50.017 "data_offset": 0, 00:16:50.017 "data_size": 65536 00:16:50.017 }, 00:16:50.017 { 00:16:50.017 "name": "BaseBdev3", 00:16:50.017 "uuid": "1132880f-6f56-59d3-93ac-1d915907d09f", 00:16:50.017 "is_configured": true, 00:16:50.017 "data_offset": 0, 00:16:50.017 "data_size": 65536 00:16:50.017 }, 00:16:50.017 { 00:16:50.017 "name": "BaseBdev4", 00:16:50.017 "uuid": "6e54b218-2ba1-5fdc-a240-5198855c69c6", 00:16:50.017 "is_configured": true, 00:16:50.017 "data_offset": 0, 00:16:50.017 "data_size": 65536 00:16:50.017 } 00:16:50.017 ] 00:16:50.017 }' 00:16:50.017 17:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.017 17:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.017 17:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.276 17:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.276 17:09:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:51.213 17:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.213 17:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.213 17:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.213 17:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:16:51.213 17:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.213 17:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.213 17:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.213 17:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.213 17:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.213 17:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.213 17:09:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.213 17:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.213 "name": "raid_bdev1", 00:16:51.213 "uuid": "166db7d0-b663-4171-bbab-f4c6ef67b2a9", 00:16:51.213 "strip_size_kb": 64, 00:16:51.213 "state": "online", 00:16:51.213 "raid_level": "raid5f", 00:16:51.213 "superblock": false, 00:16:51.213 "num_base_bdevs": 4, 00:16:51.213 "num_base_bdevs_discovered": 4, 00:16:51.213 "num_base_bdevs_operational": 4, 00:16:51.213 "process": { 00:16:51.213 "type": "rebuild", 00:16:51.213 "target": "spare", 00:16:51.213 "progress": { 00:16:51.213 "blocks": 153600, 00:16:51.213 "percent": 78 00:16:51.213 } 00:16:51.213 }, 00:16:51.213 "base_bdevs_list": [ 00:16:51.213 { 00:16:51.213 "name": "spare", 00:16:51.213 "uuid": "007636eb-1d20-5c95-953f-fd34e9a4b669", 00:16:51.213 "is_configured": true, 00:16:51.213 "data_offset": 0, 00:16:51.213 "data_size": 65536 00:16:51.213 }, 00:16:51.213 { 00:16:51.213 "name": "BaseBdev2", 00:16:51.213 "uuid": "665fc75b-6a79-56a8-9455-67087154871b", 00:16:51.213 "is_configured": true, 00:16:51.213 "data_offset": 0, 00:16:51.213 "data_size": 65536 00:16:51.213 }, 00:16:51.213 { 00:16:51.213 "name": "BaseBdev3", 00:16:51.213 "uuid": 
"1132880f-6f56-59d3-93ac-1d915907d09f", 00:16:51.213 "is_configured": true, 00:16:51.213 "data_offset": 0, 00:16:51.213 "data_size": 65536 00:16:51.213 }, 00:16:51.213 { 00:16:51.213 "name": "BaseBdev4", 00:16:51.213 "uuid": "6e54b218-2ba1-5fdc-a240-5198855c69c6", 00:16:51.213 "is_configured": true, 00:16:51.213 "data_offset": 0, 00:16:51.213 "data_size": 65536 00:16:51.213 } 00:16:51.213 ] 00:16:51.213 }' 00:16:51.213 17:09:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.213 17:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.213 17:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.213 17:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.213 17:09:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:52.641 17:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.641 17:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.641 17:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.641 17:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.641 17:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.641 17:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.641 17:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.641 17:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.641 17:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.641 17:09:16 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.641 17:09:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.641 17:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.641 "name": "raid_bdev1", 00:16:52.641 "uuid": "166db7d0-b663-4171-bbab-f4c6ef67b2a9", 00:16:52.641 "strip_size_kb": 64, 00:16:52.641 "state": "online", 00:16:52.641 "raid_level": "raid5f", 00:16:52.641 "superblock": false, 00:16:52.641 "num_base_bdevs": 4, 00:16:52.641 "num_base_bdevs_discovered": 4, 00:16:52.641 "num_base_bdevs_operational": 4, 00:16:52.641 "process": { 00:16:52.641 "type": "rebuild", 00:16:52.641 "target": "spare", 00:16:52.641 "progress": { 00:16:52.641 "blocks": 176640, 00:16:52.641 "percent": 89 00:16:52.641 } 00:16:52.641 }, 00:16:52.641 "base_bdevs_list": [ 00:16:52.641 { 00:16:52.641 "name": "spare", 00:16:52.641 "uuid": "007636eb-1d20-5c95-953f-fd34e9a4b669", 00:16:52.641 "is_configured": true, 00:16:52.641 "data_offset": 0, 00:16:52.641 "data_size": 65536 00:16:52.641 }, 00:16:52.641 { 00:16:52.641 "name": "BaseBdev2", 00:16:52.641 "uuid": "665fc75b-6a79-56a8-9455-67087154871b", 00:16:52.641 "is_configured": true, 00:16:52.641 "data_offset": 0, 00:16:52.641 "data_size": 65536 00:16:52.641 }, 00:16:52.641 { 00:16:52.641 "name": "BaseBdev3", 00:16:52.641 "uuid": "1132880f-6f56-59d3-93ac-1d915907d09f", 00:16:52.641 "is_configured": true, 00:16:52.641 "data_offset": 0, 00:16:52.641 "data_size": 65536 00:16:52.641 }, 00:16:52.641 { 00:16:52.641 "name": "BaseBdev4", 00:16:52.641 "uuid": "6e54b218-2ba1-5fdc-a240-5198855c69c6", 00:16:52.641 "is_configured": true, 00:16:52.641 "data_offset": 0, 00:16:52.641 "data_size": 65536 00:16:52.641 } 00:16:52.641 ] 00:16:52.641 }' 00:16:52.641 17:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.641 17:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:52.641 17:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.641 17:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.641 17:09:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:53.578 [2024-11-20 17:09:17.146536] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:53.578 [2024-11-20 17:09:17.146636] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:53.578 [2024-11-20 17:09:17.146697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.578 "name": "raid_bdev1", 00:16:53.578 "uuid": "166db7d0-b663-4171-bbab-f4c6ef67b2a9", 00:16:53.578 "strip_size_kb": 64, 00:16:53.578 "state": "online", 00:16:53.578 "raid_level": "raid5f", 00:16:53.578 "superblock": false, 00:16:53.578 "num_base_bdevs": 4, 00:16:53.578 "num_base_bdevs_discovered": 4, 00:16:53.578 "num_base_bdevs_operational": 4, 00:16:53.578 "base_bdevs_list": [ 00:16:53.578 { 00:16:53.578 "name": "spare", 00:16:53.578 "uuid": "007636eb-1d20-5c95-953f-fd34e9a4b669", 00:16:53.578 "is_configured": true, 00:16:53.578 "data_offset": 0, 00:16:53.578 "data_size": 65536 00:16:53.578 }, 00:16:53.578 { 00:16:53.578 "name": "BaseBdev2", 00:16:53.578 "uuid": "665fc75b-6a79-56a8-9455-67087154871b", 00:16:53.578 "is_configured": true, 00:16:53.578 "data_offset": 0, 00:16:53.578 "data_size": 65536 00:16:53.578 }, 00:16:53.578 { 00:16:53.578 "name": "BaseBdev3", 00:16:53.578 "uuid": "1132880f-6f56-59d3-93ac-1d915907d09f", 00:16:53.578 "is_configured": true, 00:16:53.578 "data_offset": 0, 00:16:53.578 "data_size": 65536 00:16:53.578 }, 00:16:53.578 { 00:16:53.578 "name": "BaseBdev4", 00:16:53.578 "uuid": "6e54b218-2ba1-5fdc-a240-5198855c69c6", 00:16:53.578 "is_configured": true, 00:16:53.578 "data_offset": 0, 00:16:53.578 "data_size": 65536 00:16:53.578 } 00:16:53.578 ] 00:16:53.578 }' 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:53.578 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.579 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.579 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.579 17:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.579 17:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.838 "name": "raid_bdev1", 00:16:53.838 "uuid": "166db7d0-b663-4171-bbab-f4c6ef67b2a9", 00:16:53.838 "strip_size_kb": 64, 00:16:53.838 "state": "online", 00:16:53.838 "raid_level": "raid5f", 00:16:53.838 "superblock": false, 00:16:53.838 "num_base_bdevs": 4, 00:16:53.838 "num_base_bdevs_discovered": 4, 00:16:53.838 "num_base_bdevs_operational": 4, 00:16:53.838 "base_bdevs_list": [ 00:16:53.838 { 00:16:53.838 "name": "spare", 00:16:53.838 "uuid": "007636eb-1d20-5c95-953f-fd34e9a4b669", 00:16:53.838 "is_configured": true, 00:16:53.838 "data_offset": 0, 00:16:53.838 "data_size": 65536 00:16:53.838 }, 00:16:53.838 { 00:16:53.838 "name": "BaseBdev2", 00:16:53.838 "uuid": "665fc75b-6a79-56a8-9455-67087154871b", 00:16:53.838 "is_configured": true, 00:16:53.838 "data_offset": 0, 00:16:53.838 "data_size": 65536 00:16:53.838 }, 00:16:53.838 { 00:16:53.838 "name": "BaseBdev3", 
00:16:53.838 "uuid": "1132880f-6f56-59d3-93ac-1d915907d09f", 00:16:53.838 "is_configured": true, 00:16:53.838 "data_offset": 0, 00:16:53.838 "data_size": 65536 00:16:53.838 }, 00:16:53.838 { 00:16:53.838 "name": "BaseBdev4", 00:16:53.838 "uuid": "6e54b218-2ba1-5fdc-a240-5198855c69c6", 00:16:53.838 "is_configured": true, 00:16:53.838 "data_offset": 0, 00:16:53.838 "data_size": 65536 00:16:53.838 } 00:16:53.838 ] 00:16:53.838 }' 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.838 17:09:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.838 17:09:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.838 "name": "raid_bdev1", 00:16:53.838 "uuid": "166db7d0-b663-4171-bbab-f4c6ef67b2a9", 00:16:53.838 "strip_size_kb": 64, 00:16:53.838 "state": "online", 00:16:53.838 "raid_level": "raid5f", 00:16:53.838 "superblock": false, 00:16:53.838 "num_base_bdevs": 4, 00:16:53.838 "num_base_bdevs_discovered": 4, 00:16:53.838 "num_base_bdevs_operational": 4, 00:16:53.838 "base_bdevs_list": [ 00:16:53.838 { 00:16:53.838 "name": "spare", 00:16:53.838 "uuid": "007636eb-1d20-5c95-953f-fd34e9a4b669", 00:16:53.838 "is_configured": true, 00:16:53.838 "data_offset": 0, 00:16:53.838 "data_size": 65536 00:16:53.838 }, 00:16:53.838 { 00:16:53.838 "name": "BaseBdev2", 00:16:53.838 "uuid": "665fc75b-6a79-56a8-9455-67087154871b", 00:16:53.838 "is_configured": true, 00:16:53.838 "data_offset": 0, 00:16:53.838 "data_size": 65536 00:16:53.838 }, 00:16:53.838 { 00:16:53.838 "name": "BaseBdev3", 00:16:53.838 "uuid": "1132880f-6f56-59d3-93ac-1d915907d09f", 00:16:53.838 "is_configured": true, 00:16:53.838 "data_offset": 0, 00:16:53.838 "data_size": 65536 00:16:53.838 }, 00:16:53.838 { 00:16:53.838 "name": "BaseBdev4", 00:16:53.838 "uuid": "6e54b218-2ba1-5fdc-a240-5198855c69c6", 00:16:53.838 "is_configured": true, 00:16:53.838 "data_offset": 0, 00:16:53.838 "data_size": 65536 00:16:53.838 } 00:16:53.838 ] 00:16:53.838 }' 00:16:53.839 17:09:17 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.839 17:09:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.406 [2024-11-20 17:09:18.121350] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.406 [2024-11-20 17:09:18.121393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.406 [2024-11-20 17:09:18.121488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.406 [2024-11-20 17:09:18.121598] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.406 [2024-11-20 17:09:18.121615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:54.406 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:54.674 /dev/nbd0 00:16:54.674 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:54.674 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:54.674 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:54.674 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:54.674 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:54.674 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:54.674 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:54.675 17:09:18 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:54.675 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:54.675 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:54.675 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:54.675 1+0 records in 00:16:54.675 1+0 records out 00:16:54.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299999 s, 13.7 MB/s 00:16:54.675 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.675 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:54.675 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.675 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:54.675 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:54.675 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:54.675 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:54.675 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:55.247 /dev/nbd1 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 
00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:55.247 1+0 records in 00:16:55.247 1+0 records out 00:16:55.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406684 s, 10.1 MB/s 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:55.247 17:09:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:55.247 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:55.247 
17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:55.247 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:55.247 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:55.247 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:55.247 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:55.247 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:55.506 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:55.506 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:55.506 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:55.506 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:55.506 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:55.506 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:55.506 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:55.506 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:55.506 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:55.506 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:55.765 17:09:19 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84784 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84784 ']' 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84784 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84784 00:16:55.765 killing process with pid 84784 00:16:55.765 Received shutdown signal, test time was about 60.000000 seconds 00:16:55.765 00:16:55.765 Latency(us) 00:16:55.765 [2024-11-20T17:09:19.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.765 [2024-11-20T17:09:19.634Z] =================================================================================================================== 00:16:55.765 [2024-11-20T17:09:19.634Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84784' 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84784 00:16:55.765 17:09:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84784 00:16:55.765 [2024-11-20 17:09:19.601667] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:56.333 [2024-11-20 17:09:20.009574] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:57.270 00:16:57.270 real 0m20.167s 00:16:57.270 user 0m24.955s 00:16:57.270 sys 0m2.409s 00:16:57.270 ************************************ 00:16:57.270 END TEST raid5f_rebuild_test 00:16:57.270 ************************************ 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.270 17:09:21 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:57.270 17:09:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:57.270 17:09:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.270 17:09:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:57.270 ************************************ 00:16:57.270 START TEST raid5f_rebuild_test_sb 00:16:57.270 ************************************ 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # 
local num_base_bdevs=4 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85293 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85293 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85293 ']' 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.270 17:09:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.530 [2024-11-20 17:09:21.212517] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:16:57.530 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:57.530 Zero copy mechanism will not be used. 00:16:57.530 [2024-11-20 17:09:21.212960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85293 ] 00:16:57.530 [2024-11-20 17:09:21.388333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.789 [2024-11-20 17:09:21.526910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.047 [2024-11-20 17:09:21.726482] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:58.047 [2024-11-20 17:09:21.726559] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:58.614 17:09:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.614 BaseBdev1_malloc 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.614 [2024-11-20 17:09:22.255954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:58.614 [2024-11-20 17:09:22.256254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.614 [2024-11-20 17:09:22.256295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:58.614 [2024-11-20 17:09:22.256315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.614 [2024-11-20 17:09:22.259344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.614 [2024-11-20 17:09:22.259548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:58.614 BaseBdev1 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:58.614 BaseBdev2_malloc 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.614 [2024-11-20 17:09:22.307874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:58.614 [2024-11-20 17:09:22.308154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.614 [2024-11-20 17:09:22.308210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:58.614 [2024-11-20 17:09:22.308229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.614 [2024-11-20 17:09:22.311204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.614 [2024-11-20 17:09:22.311464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:58.614 BaseBdev2 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.614 BaseBdev3_malloc 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.614 17:09:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.614 [2024-11-20 17:09:22.374510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:58.614 [2024-11-20 17:09:22.374592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.614 [2024-11-20 17:09:22.374622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:58.614 [2024-11-20 17:09:22.374640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.614 [2024-11-20 17:09:22.377486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.614 [2024-11-20 17:09:22.377700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:58.614 BaseBdev3 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.614 BaseBdev4_malloc 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.614 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.615 [2024-11-20 17:09:22.430358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:58.615 [2024-11-20 17:09:22.430430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.615 [2024-11-20 17:09:22.430459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:58.615 [2024-11-20 17:09:22.430481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.615 [2024-11-20 17:09:22.433458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.615 [2024-11-20 17:09:22.433514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:58.615 BaseBdev4 00:16:58.615 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.615 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:58.615 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.615 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.874 spare_malloc 00:16:58.874 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.874 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:58.874 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.874 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.874 spare_delay 00:16:58.874 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:58.874 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:58.874 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.874 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.874 [2024-11-20 17:09:22.502252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:58.874 [2024-11-20 17:09:22.502329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.874 [2024-11-20 17:09:22.502355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:58.874 [2024-11-20 17:09:22.502372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.874 [2024-11-20 17:09:22.505527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.874 [2024-11-20 17:09:22.505589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:58.874 spare 00:16:58.874 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.874 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:58.874 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.874 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.874 [2024-11-20 17:09:22.514339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.874 [2024-11-20 17:09:22.517032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:58.874 [2024-11-20 17:09:22.517154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:58.874 
[2024-11-20 17:09:22.517248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:58.874 [2024-11-20 17:09:22.517560] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:58.874 [2024-11-20 17:09:22.517595] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:58.874 [2024-11-20 17:09:22.517955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:58.874 [2024-11-20 17:09:22.524921] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:58.874 [2024-11-20 17:09:22.524952] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:58.874 [2024-11-20 17:09:22.525250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.874 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.874 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:58.874 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.874 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.875 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.875 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.875 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.875 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.875 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.875 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:58.875 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.875 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.875 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.875 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.875 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.875 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.875 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.875 "name": "raid_bdev1", 00:16:58.875 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:16:58.875 "strip_size_kb": 64, 00:16:58.875 "state": "online", 00:16:58.875 "raid_level": "raid5f", 00:16:58.875 "superblock": true, 00:16:58.875 "num_base_bdevs": 4, 00:16:58.875 "num_base_bdevs_discovered": 4, 00:16:58.875 "num_base_bdevs_operational": 4, 00:16:58.875 "base_bdevs_list": [ 00:16:58.875 { 00:16:58.875 "name": "BaseBdev1", 00:16:58.875 "uuid": "3c0df90d-af6c-5c4b-a713-502013e2a05d", 00:16:58.875 "is_configured": true, 00:16:58.875 "data_offset": 2048, 00:16:58.875 "data_size": 63488 00:16:58.875 }, 00:16:58.875 { 00:16:58.875 "name": "BaseBdev2", 00:16:58.875 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:16:58.875 "is_configured": true, 00:16:58.875 "data_offset": 2048, 00:16:58.875 "data_size": 63488 00:16:58.875 }, 00:16:58.875 { 00:16:58.875 "name": "BaseBdev3", 00:16:58.875 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:16:58.875 "is_configured": true, 00:16:58.875 "data_offset": 2048, 00:16:58.875 "data_size": 63488 00:16:58.875 }, 00:16:58.875 { 00:16:58.875 "name": "BaseBdev4", 00:16:58.875 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:16:58.875 "is_configured": 
true, 00:16:58.875 "data_offset": 2048, 00:16:58.875 "data_size": 63488 00:16:58.875 } 00:16:58.875 ] 00:16:58.875 }' 00:16:58.875 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.875 17:09:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.442 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.443 [2024-11-20 17:09:23.037458] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:59.443 17:09:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:59.443 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:59.702 [2024-11-20 17:09:23.357339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:59.702 /dev/nbd0 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:59.702 1+0 records in 00:16:59.702 1+0 records out 00:16:59.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387538 s, 10.6 MB/s 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:59.702 17:09:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:00.270 496+0 records in 00:17:00.270 496+0 records out 00:17:00.270 97517568 bytes (98 MB, 93 MiB) copied, 0.653515 s, 149 MB/s 00:17:00.270 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:00.270 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:00.270 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:00.270 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:00.270 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:00.270 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:00.270 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:00.837 [2024-11-20 17:09:24.399797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.837 [2024-11-20 17:09:24.427810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.837 17:09:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.837 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.837 "name": "raid_bdev1", 00:17:00.837 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:00.837 "strip_size_kb": 64, 00:17:00.837 "state": "online", 00:17:00.837 "raid_level": "raid5f", 00:17:00.837 "superblock": true, 00:17:00.837 "num_base_bdevs": 4, 00:17:00.837 "num_base_bdevs_discovered": 3, 00:17:00.837 "num_base_bdevs_operational": 3, 00:17:00.837 "base_bdevs_list": [ 00:17:00.837 { 00:17:00.837 "name": null, 00:17:00.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.837 "is_configured": false, 00:17:00.837 "data_offset": 0, 00:17:00.837 "data_size": 63488 00:17:00.837 }, 00:17:00.837 { 00:17:00.837 "name": "BaseBdev2", 00:17:00.837 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:00.837 "is_configured": true, 00:17:00.837 "data_offset": 2048, 00:17:00.837 "data_size": 63488 00:17:00.837 }, 00:17:00.837 { 00:17:00.838 "name": "BaseBdev3", 00:17:00.838 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:00.838 "is_configured": true, 00:17:00.838 "data_offset": 2048, 00:17:00.838 "data_size": 63488 00:17:00.838 }, 00:17:00.838 { 00:17:00.838 "name": "BaseBdev4", 00:17:00.838 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:00.838 "is_configured": true, 00:17:00.838 "data_offset": 2048, 00:17:00.838 "data_size": 63488 00:17:00.838 } 00:17:00.838 ] 00:17:00.838 }' 00:17:00.838 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.838 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.096 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:01.096 17:09:24 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.096 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.096 [2024-11-20 17:09:24.923958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:01.096 [2024-11-20 17:09:24.938512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:01.096 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.096 17:09:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:01.096 [2024-11-20 17:09:24.947928] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:02.471 17:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.471 17:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.471 17:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.471 17:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.471 17:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.471 17:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.471 17:09:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.471 17:09:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.471 17:09:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.471 17:09:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.471 "name": 
"raid_bdev1", 00:17:02.471 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:02.471 "strip_size_kb": 64, 00:17:02.471 "state": "online", 00:17:02.471 "raid_level": "raid5f", 00:17:02.471 "superblock": true, 00:17:02.471 "num_base_bdevs": 4, 00:17:02.471 "num_base_bdevs_discovered": 4, 00:17:02.471 "num_base_bdevs_operational": 4, 00:17:02.471 "process": { 00:17:02.471 "type": "rebuild", 00:17:02.471 "target": "spare", 00:17:02.471 "progress": { 00:17:02.471 "blocks": 19200, 00:17:02.471 "percent": 10 00:17:02.471 } 00:17:02.471 }, 00:17:02.471 "base_bdevs_list": [ 00:17:02.471 { 00:17:02.471 "name": "spare", 00:17:02.471 "uuid": "b04ca50b-e7b3-5ee7-a5c6-1b25f9d2fa65", 00:17:02.471 "is_configured": true, 00:17:02.471 "data_offset": 2048, 00:17:02.471 "data_size": 63488 00:17:02.471 }, 00:17:02.471 { 00:17:02.471 "name": "BaseBdev2", 00:17:02.471 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:02.471 "is_configured": true, 00:17:02.471 "data_offset": 2048, 00:17:02.471 "data_size": 63488 00:17:02.471 }, 00:17:02.471 { 00:17:02.471 "name": "BaseBdev3", 00:17:02.471 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:02.471 "is_configured": true, 00:17:02.471 "data_offset": 2048, 00:17:02.471 "data_size": 63488 00:17:02.471 }, 00:17:02.471 { 00:17:02.471 "name": "BaseBdev4", 00:17:02.471 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:02.471 "is_configured": true, 00:17:02.471 "data_offset": 2048, 00:17:02.471 "data_size": 63488 00:17:02.471 } 00:17:02.471 ] 00:17:02.471 }' 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.471 17:09:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.471 [2024-11-20 17:09:26.117368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:02.471 [2024-11-20 17:09:26.158871] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:02.471 [2024-11-20 17:09:26.158969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.471 [2024-11-20 17:09:26.158996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:02.471 [2024-11-20 17:09:26.159011] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.471 17:09:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.471 "name": "raid_bdev1", 00:17:02.471 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:02.471 "strip_size_kb": 64, 00:17:02.471 "state": "online", 00:17:02.471 "raid_level": "raid5f", 00:17:02.471 "superblock": true, 00:17:02.471 "num_base_bdevs": 4, 00:17:02.471 "num_base_bdevs_discovered": 3, 00:17:02.471 "num_base_bdevs_operational": 3, 00:17:02.471 "base_bdevs_list": [ 00:17:02.471 { 00:17:02.471 "name": null, 00:17:02.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.471 "is_configured": false, 00:17:02.471 "data_offset": 0, 00:17:02.471 "data_size": 63488 00:17:02.471 }, 00:17:02.471 { 00:17:02.471 "name": "BaseBdev2", 00:17:02.471 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:02.471 "is_configured": true, 00:17:02.471 "data_offset": 2048, 00:17:02.471 "data_size": 63488 00:17:02.471 }, 00:17:02.471 { 00:17:02.471 "name": "BaseBdev3", 00:17:02.471 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:02.471 "is_configured": true, 00:17:02.471 "data_offset": 2048, 00:17:02.471 "data_size": 63488 00:17:02.471 }, 00:17:02.471 { 00:17:02.471 "name": "BaseBdev4", 00:17:02.471 "uuid": 
"b04bbc87-b839-5968-825b-d604c98773e5", 00:17:02.471 "is_configured": true, 00:17:02.471 "data_offset": 2048, 00:17:02.471 "data_size": 63488 00:17:02.471 } 00:17:02.471 ] 00:17:02.471 }' 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.471 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.063 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:03.063 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.063 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:03.063 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:03.063 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.063 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.063 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.063 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.063 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.063 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.063 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.063 "name": "raid_bdev1", 00:17:03.063 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:03.063 "strip_size_kb": 64, 00:17:03.063 "state": "online", 00:17:03.063 "raid_level": "raid5f", 00:17:03.063 "superblock": true, 00:17:03.063 "num_base_bdevs": 4, 00:17:03.063 "num_base_bdevs_discovered": 3, 00:17:03.063 "num_base_bdevs_operational": 3, 00:17:03.063 
"base_bdevs_list": [ 00:17:03.063 { 00:17:03.063 "name": null, 00:17:03.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.063 "is_configured": false, 00:17:03.063 "data_offset": 0, 00:17:03.063 "data_size": 63488 00:17:03.063 }, 00:17:03.063 { 00:17:03.063 "name": "BaseBdev2", 00:17:03.063 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:03.063 "is_configured": true, 00:17:03.063 "data_offset": 2048, 00:17:03.063 "data_size": 63488 00:17:03.063 }, 00:17:03.063 { 00:17:03.063 "name": "BaseBdev3", 00:17:03.063 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:03.063 "is_configured": true, 00:17:03.063 "data_offset": 2048, 00:17:03.063 "data_size": 63488 00:17:03.063 }, 00:17:03.063 { 00:17:03.063 "name": "BaseBdev4", 00:17:03.063 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:03.063 "is_configured": true, 00:17:03.063 "data_offset": 2048, 00:17:03.063 "data_size": 63488 00:17:03.063 } 00:17:03.063 ] 00:17:03.063 }' 00:17:03.064 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.064 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:03.064 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.064 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:03.064 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:03.064 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.064 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.064 [2024-11-20 17:09:26.846447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:03.064 [2024-11-20 17:09:26.860411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 
00:17:03.064 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.064 17:09:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:03.064 [2024-11-20 17:09:26.869629] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:04.441 17:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.441 17:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.441 17:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.441 17:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.441 17:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.441 17:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.441 17:09:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.441 17:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.441 17:09:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.441 17:09:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.441 17:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.441 "name": "raid_bdev1", 00:17:04.441 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:04.441 "strip_size_kb": 64, 00:17:04.441 "state": "online", 00:17:04.441 "raid_level": "raid5f", 00:17:04.441 "superblock": true, 00:17:04.441 "num_base_bdevs": 4, 00:17:04.441 "num_base_bdevs_discovered": 4, 00:17:04.441 "num_base_bdevs_operational": 4, 00:17:04.441 "process": { 00:17:04.441 "type": "rebuild", 
00:17:04.441 "target": "spare", 00:17:04.441 "progress": { 00:17:04.441 "blocks": 17280, 00:17:04.441 "percent": 9 00:17:04.441 } 00:17:04.441 }, 00:17:04.441 "base_bdevs_list": [ 00:17:04.441 { 00:17:04.441 "name": "spare", 00:17:04.441 "uuid": "b04ca50b-e7b3-5ee7-a5c6-1b25f9d2fa65", 00:17:04.441 "is_configured": true, 00:17:04.441 "data_offset": 2048, 00:17:04.441 "data_size": 63488 00:17:04.441 }, 00:17:04.441 { 00:17:04.441 "name": "BaseBdev2", 00:17:04.441 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:04.441 "is_configured": true, 00:17:04.441 "data_offset": 2048, 00:17:04.441 "data_size": 63488 00:17:04.441 }, 00:17:04.441 { 00:17:04.441 "name": "BaseBdev3", 00:17:04.441 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:04.441 "is_configured": true, 00:17:04.441 "data_offset": 2048, 00:17:04.441 "data_size": 63488 00:17:04.441 }, 00:17:04.441 { 00:17:04.441 "name": "BaseBdev4", 00:17:04.441 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:04.441 "is_configured": true, 00:17:04.441 "data_offset": 2048, 00:17:04.441 "data_size": 63488 00:17:04.441 } 00:17:04.441 ] 00:17:04.441 }' 00:17:04.441 17:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.441 17:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.441 17:09:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:04.441 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local 
num_base_bdevs_operational=4 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=684 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.441 "name": "raid_bdev1", 00:17:04.441 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:04.441 "strip_size_kb": 64, 00:17:04.441 "state": "online", 00:17:04.441 "raid_level": "raid5f", 00:17:04.441 "superblock": true, 00:17:04.441 "num_base_bdevs": 4, 00:17:04.441 "num_base_bdevs_discovered": 4, 00:17:04.441 "num_base_bdevs_operational": 4, 00:17:04.441 "process": { 00:17:04.441 "type": "rebuild", 
00:17:04.441 "target": "spare", 00:17:04.441 "progress": { 00:17:04.441 "blocks": 21120, 00:17:04.441 "percent": 11 00:17:04.441 } 00:17:04.441 }, 00:17:04.441 "base_bdevs_list": [ 00:17:04.441 { 00:17:04.441 "name": "spare", 00:17:04.441 "uuid": "b04ca50b-e7b3-5ee7-a5c6-1b25f9d2fa65", 00:17:04.441 "is_configured": true, 00:17:04.441 "data_offset": 2048, 00:17:04.441 "data_size": 63488 00:17:04.441 }, 00:17:04.441 { 00:17:04.441 "name": "BaseBdev2", 00:17:04.441 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:04.441 "is_configured": true, 00:17:04.441 "data_offset": 2048, 00:17:04.441 "data_size": 63488 00:17:04.441 }, 00:17:04.441 { 00:17:04.441 "name": "BaseBdev3", 00:17:04.441 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:04.441 "is_configured": true, 00:17:04.441 "data_offset": 2048, 00:17:04.441 "data_size": 63488 00:17:04.441 }, 00:17:04.441 { 00:17:04.441 "name": "BaseBdev4", 00:17:04.441 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:04.441 "is_configured": true, 00:17:04.441 "data_offset": 2048, 00:17:04.441 "data_size": 63488 00:17:04.441 } 00:17:04.441 ] 00:17:04.441 }' 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.441 17:09:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:05.377 17:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.377 17:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.377 17:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 
-- # local raid_bdev_name=raid_bdev1 00:17:05.377 17:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.377 17:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.377 17:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.377 17:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.377 17:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.377 17:09:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.377 17:09:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.377 17:09:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.636 17:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.636 "name": "raid_bdev1", 00:17:05.636 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:05.636 "strip_size_kb": 64, 00:17:05.636 "state": "online", 00:17:05.636 "raid_level": "raid5f", 00:17:05.636 "superblock": true, 00:17:05.636 "num_base_bdevs": 4, 00:17:05.636 "num_base_bdevs_discovered": 4, 00:17:05.636 "num_base_bdevs_operational": 4, 00:17:05.636 "process": { 00:17:05.636 "type": "rebuild", 00:17:05.636 "target": "spare", 00:17:05.636 "progress": { 00:17:05.636 "blocks": 44160, 00:17:05.636 "percent": 23 00:17:05.636 } 00:17:05.636 }, 00:17:05.636 "base_bdevs_list": [ 00:17:05.636 { 00:17:05.636 "name": "spare", 00:17:05.636 "uuid": "b04ca50b-e7b3-5ee7-a5c6-1b25f9d2fa65", 00:17:05.636 "is_configured": true, 00:17:05.636 "data_offset": 2048, 00:17:05.636 "data_size": 63488 00:17:05.636 }, 00:17:05.636 { 00:17:05.636 "name": "BaseBdev2", 00:17:05.636 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:05.636 "is_configured": true, 00:17:05.636 
"data_offset": 2048, 00:17:05.636 "data_size": 63488 00:17:05.636 }, 00:17:05.636 { 00:17:05.636 "name": "BaseBdev3", 00:17:05.636 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:05.636 "is_configured": true, 00:17:05.636 "data_offset": 2048, 00:17:05.636 "data_size": 63488 00:17:05.636 }, 00:17:05.636 { 00:17:05.636 "name": "BaseBdev4", 00:17:05.636 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:05.636 "is_configured": true, 00:17:05.636 "data_offset": 2048, 00:17:05.636 "data_size": 63488 00:17:05.636 } 00:17:05.636 ] 00:17:05.636 }' 00:17:05.636 17:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.636 17:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.636 17:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.636 17:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.636 17:09:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:06.571 17:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.571 17:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.571 17:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.571 17:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.571 17:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.571 17:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.571 17:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.571 17:09:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.571 17:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.571 17:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.571 17:09:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.830 17:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.830 "name": "raid_bdev1", 00:17:06.830 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:06.830 "strip_size_kb": 64, 00:17:06.830 "state": "online", 00:17:06.830 "raid_level": "raid5f", 00:17:06.830 "superblock": true, 00:17:06.830 "num_base_bdevs": 4, 00:17:06.830 "num_base_bdevs_discovered": 4, 00:17:06.830 "num_base_bdevs_operational": 4, 00:17:06.830 "process": { 00:17:06.830 "type": "rebuild", 00:17:06.830 "target": "spare", 00:17:06.830 "progress": { 00:17:06.830 "blocks": 65280, 00:17:06.830 "percent": 34 00:17:06.830 } 00:17:06.830 }, 00:17:06.830 "base_bdevs_list": [ 00:17:06.830 { 00:17:06.830 "name": "spare", 00:17:06.830 "uuid": "b04ca50b-e7b3-5ee7-a5c6-1b25f9d2fa65", 00:17:06.830 "is_configured": true, 00:17:06.830 "data_offset": 2048, 00:17:06.830 "data_size": 63488 00:17:06.830 }, 00:17:06.830 { 00:17:06.830 "name": "BaseBdev2", 00:17:06.830 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:06.830 "is_configured": true, 00:17:06.830 "data_offset": 2048, 00:17:06.830 "data_size": 63488 00:17:06.830 }, 00:17:06.830 { 00:17:06.830 "name": "BaseBdev3", 00:17:06.830 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:06.830 "is_configured": true, 00:17:06.830 "data_offset": 2048, 00:17:06.830 "data_size": 63488 00:17:06.830 }, 00:17:06.830 { 00:17:06.830 "name": "BaseBdev4", 00:17:06.830 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:06.830 "is_configured": true, 00:17:06.830 "data_offset": 2048, 00:17:06.830 "data_size": 63488 00:17:06.830 } 00:17:06.830 ] 
00:17:06.830 }' 00:17:06.830 17:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.830 17:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.830 17:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.830 17:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.830 17:09:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:07.765 17:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:07.765 17:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.765 17:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.765 17:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.765 17:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.766 17:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.766 17:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.766 17:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.766 17:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.766 17:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.766 17:09:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.766 17:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.766 "name": "raid_bdev1", 00:17:07.766 "uuid": 
"f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:07.766 "strip_size_kb": 64, 00:17:07.766 "state": "online", 00:17:07.766 "raid_level": "raid5f", 00:17:07.766 "superblock": true, 00:17:07.766 "num_base_bdevs": 4, 00:17:07.766 "num_base_bdevs_discovered": 4, 00:17:07.766 "num_base_bdevs_operational": 4, 00:17:07.766 "process": { 00:17:07.766 "type": "rebuild", 00:17:07.766 "target": "spare", 00:17:07.766 "progress": { 00:17:07.766 "blocks": 88320, 00:17:07.766 "percent": 46 00:17:07.766 } 00:17:07.766 }, 00:17:07.766 "base_bdevs_list": [ 00:17:07.766 { 00:17:07.766 "name": "spare", 00:17:07.766 "uuid": "b04ca50b-e7b3-5ee7-a5c6-1b25f9d2fa65", 00:17:07.766 "is_configured": true, 00:17:07.766 "data_offset": 2048, 00:17:07.766 "data_size": 63488 00:17:07.766 }, 00:17:07.766 { 00:17:07.766 "name": "BaseBdev2", 00:17:07.766 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:07.766 "is_configured": true, 00:17:07.766 "data_offset": 2048, 00:17:07.766 "data_size": 63488 00:17:07.766 }, 00:17:07.766 { 00:17:07.766 "name": "BaseBdev3", 00:17:07.766 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:07.766 "is_configured": true, 00:17:07.766 "data_offset": 2048, 00:17:07.766 "data_size": 63488 00:17:07.766 }, 00:17:07.766 { 00:17:07.766 "name": "BaseBdev4", 00:17:07.766 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:07.766 "is_configured": true, 00:17:07.766 "data_offset": 2048, 00:17:07.766 "data_size": 63488 00:17:07.766 } 00:17:07.766 ] 00:17:07.766 }' 00:17:07.766 17:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.024 17:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.024 17:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.024 17:09:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.024 17:09:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.959 17:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.959 17:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.959 17:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.959 17:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.959 17:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.959 17:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.959 17:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.959 17:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.959 17:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.959 17:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.959 17:09:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.959 17:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.959 "name": "raid_bdev1", 00:17:08.959 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:08.959 "strip_size_kb": 64, 00:17:08.959 "state": "online", 00:17:08.959 "raid_level": "raid5f", 00:17:08.959 "superblock": true, 00:17:08.959 "num_base_bdevs": 4, 00:17:08.959 "num_base_bdevs_discovered": 4, 00:17:08.959 "num_base_bdevs_operational": 4, 00:17:08.959 "process": { 00:17:08.959 "type": "rebuild", 00:17:08.959 "target": "spare", 00:17:08.959 "progress": { 00:17:08.959 "blocks": 111360, 00:17:08.959 "percent": 58 00:17:08.959 } 00:17:08.959 }, 00:17:08.959 "base_bdevs_list": [ 00:17:08.959 { 
00:17:08.959 "name": "spare", 00:17:08.959 "uuid": "b04ca50b-e7b3-5ee7-a5c6-1b25f9d2fa65", 00:17:08.959 "is_configured": true, 00:17:08.959 "data_offset": 2048, 00:17:08.959 "data_size": 63488 00:17:08.959 }, 00:17:08.959 { 00:17:08.959 "name": "BaseBdev2", 00:17:08.959 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:08.959 "is_configured": true, 00:17:08.959 "data_offset": 2048, 00:17:08.959 "data_size": 63488 00:17:08.959 }, 00:17:08.959 { 00:17:08.959 "name": "BaseBdev3", 00:17:08.959 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:08.959 "is_configured": true, 00:17:08.959 "data_offset": 2048, 00:17:08.959 "data_size": 63488 00:17:08.959 }, 00:17:08.959 { 00:17:08.959 "name": "BaseBdev4", 00:17:08.959 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:08.959 "is_configured": true, 00:17:08.959 "data_offset": 2048, 00:17:08.959 "data_size": 63488 00:17:08.960 } 00:17:08.960 ] 00:17:08.960 }' 00:17:08.960 17:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.218 17:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.218 17:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.218 17:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.218 17:09:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:10.153 17:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.153 17:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.153 17:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.153 17:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.153 17:09:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.153 17:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.153 17:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.153 17:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.153 17:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.153 17:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.153 17:09:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.153 17:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.153 "name": "raid_bdev1", 00:17:10.153 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:10.153 "strip_size_kb": 64, 00:17:10.153 "state": "online", 00:17:10.153 "raid_level": "raid5f", 00:17:10.153 "superblock": true, 00:17:10.153 "num_base_bdevs": 4, 00:17:10.153 "num_base_bdevs_discovered": 4, 00:17:10.153 "num_base_bdevs_operational": 4, 00:17:10.153 "process": { 00:17:10.153 "type": "rebuild", 00:17:10.153 "target": "spare", 00:17:10.153 "progress": { 00:17:10.154 "blocks": 132480, 00:17:10.154 "percent": 69 00:17:10.154 } 00:17:10.154 }, 00:17:10.154 "base_bdevs_list": [ 00:17:10.154 { 00:17:10.154 "name": "spare", 00:17:10.154 "uuid": "b04ca50b-e7b3-5ee7-a5c6-1b25f9d2fa65", 00:17:10.154 "is_configured": true, 00:17:10.154 "data_offset": 2048, 00:17:10.154 "data_size": 63488 00:17:10.154 }, 00:17:10.154 { 00:17:10.154 "name": "BaseBdev2", 00:17:10.154 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:10.154 "is_configured": true, 00:17:10.154 "data_offset": 2048, 00:17:10.154 "data_size": 63488 00:17:10.154 }, 00:17:10.154 { 00:17:10.154 "name": "BaseBdev3", 00:17:10.154 "uuid": 
"a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:10.154 "is_configured": true, 00:17:10.154 "data_offset": 2048, 00:17:10.154 "data_size": 63488 00:17:10.154 }, 00:17:10.154 { 00:17:10.154 "name": "BaseBdev4", 00:17:10.154 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:10.154 "is_configured": true, 00:17:10.154 "data_offset": 2048, 00:17:10.154 "data_size": 63488 00:17:10.154 } 00:17:10.154 ] 00:17:10.154 }' 00:17:10.154 17:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.154 17:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.154 17:09:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.412 17:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.413 17:09:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.349 17:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.349 17:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.349 17:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.349 17:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.349 17:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.349 17:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.349 17:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.349 17:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.349 17:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:11.349 17:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.349 17:09:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.349 17:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.349 "name": "raid_bdev1", 00:17:11.349 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:11.349 "strip_size_kb": 64, 00:17:11.349 "state": "online", 00:17:11.349 "raid_level": "raid5f", 00:17:11.349 "superblock": true, 00:17:11.349 "num_base_bdevs": 4, 00:17:11.349 "num_base_bdevs_discovered": 4, 00:17:11.349 "num_base_bdevs_operational": 4, 00:17:11.349 "process": { 00:17:11.349 "type": "rebuild", 00:17:11.349 "target": "spare", 00:17:11.349 "progress": { 00:17:11.349 "blocks": 155520, 00:17:11.349 "percent": 81 00:17:11.349 } 00:17:11.349 }, 00:17:11.349 "base_bdevs_list": [ 00:17:11.349 { 00:17:11.349 "name": "spare", 00:17:11.349 "uuid": "b04ca50b-e7b3-5ee7-a5c6-1b25f9d2fa65", 00:17:11.349 "is_configured": true, 00:17:11.349 "data_offset": 2048, 00:17:11.349 "data_size": 63488 00:17:11.349 }, 00:17:11.349 { 00:17:11.349 "name": "BaseBdev2", 00:17:11.349 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:11.349 "is_configured": true, 00:17:11.349 "data_offset": 2048, 00:17:11.349 "data_size": 63488 00:17:11.349 }, 00:17:11.349 { 00:17:11.349 "name": "BaseBdev3", 00:17:11.349 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:11.349 "is_configured": true, 00:17:11.349 "data_offset": 2048, 00:17:11.349 "data_size": 63488 00:17:11.349 }, 00:17:11.349 { 00:17:11.349 "name": "BaseBdev4", 00:17:11.349 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:11.349 "is_configured": true, 00:17:11.349 "data_offset": 2048, 00:17:11.349 "data_size": 63488 00:17:11.349 } 00:17:11.349 ] 00:17:11.349 }' 00:17:11.349 17:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.349 17:09:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.349 17:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.608 17:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.608 17:09:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:12.543 17:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.543 17:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.543 17:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.543 17:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.543 17:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.543 17:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.543 17:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.543 17:09:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.543 17:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.543 17:09:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.543 17:09:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.543 17:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.543 "name": "raid_bdev1", 00:17:12.543 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:12.543 "strip_size_kb": 64, 00:17:12.543 "state": "online", 00:17:12.543 "raid_level": "raid5f", 00:17:12.543 "superblock": true, 
00:17:12.543 "num_base_bdevs": 4, 00:17:12.543 "num_base_bdevs_discovered": 4, 00:17:12.543 "num_base_bdevs_operational": 4, 00:17:12.543 "process": { 00:17:12.543 "type": "rebuild", 00:17:12.543 "target": "spare", 00:17:12.543 "progress": { 00:17:12.543 "blocks": 176640, 00:17:12.543 "percent": 92 00:17:12.543 } 00:17:12.543 }, 00:17:12.543 "base_bdevs_list": [ 00:17:12.543 { 00:17:12.543 "name": "spare", 00:17:12.543 "uuid": "b04ca50b-e7b3-5ee7-a5c6-1b25f9d2fa65", 00:17:12.543 "is_configured": true, 00:17:12.543 "data_offset": 2048, 00:17:12.543 "data_size": 63488 00:17:12.543 }, 00:17:12.543 { 00:17:12.543 "name": "BaseBdev2", 00:17:12.543 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:12.543 "is_configured": true, 00:17:12.543 "data_offset": 2048, 00:17:12.543 "data_size": 63488 00:17:12.543 }, 00:17:12.543 { 00:17:12.543 "name": "BaseBdev3", 00:17:12.543 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:12.543 "is_configured": true, 00:17:12.543 "data_offset": 2048, 00:17:12.543 "data_size": 63488 00:17:12.543 }, 00:17:12.543 { 00:17:12.543 "name": "BaseBdev4", 00:17:12.543 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:12.543 "is_configured": true, 00:17:12.543 "data_offset": 2048, 00:17:12.543 "data_size": 63488 00:17:12.543 } 00:17:12.543 ] 00:17:12.543 }' 00:17:12.543 17:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.543 17:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.543 17:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.543 17:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.543 17:09:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:13.110 [2024-11-20 17:09:36.958943] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 
00:17:13.110 [2024-11-20 17:09:36.959057] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:13.110 [2024-11-20 17:09:36.959268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.713 "name": "raid_bdev1", 00:17:13.713 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:13.713 "strip_size_kb": 64, 00:17:13.713 "state": "online", 00:17:13.713 "raid_level": "raid5f", 00:17:13.713 "superblock": true, 00:17:13.713 "num_base_bdevs": 4, 00:17:13.713 "num_base_bdevs_discovered": 4, 00:17:13.713 "num_base_bdevs_operational": 4, 00:17:13.713 "base_bdevs_list": [ 
00:17:13.713 { 00:17:13.713 "name": "spare", 00:17:13.713 "uuid": "b04ca50b-e7b3-5ee7-a5c6-1b25f9d2fa65", 00:17:13.713 "is_configured": true, 00:17:13.713 "data_offset": 2048, 00:17:13.713 "data_size": 63488 00:17:13.713 }, 00:17:13.713 { 00:17:13.713 "name": "BaseBdev2", 00:17:13.713 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:13.713 "is_configured": true, 00:17:13.713 "data_offset": 2048, 00:17:13.713 "data_size": 63488 00:17:13.713 }, 00:17:13.713 { 00:17:13.713 "name": "BaseBdev3", 00:17:13.713 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:13.713 "is_configured": true, 00:17:13.713 "data_offset": 2048, 00:17:13.713 "data_size": 63488 00:17:13.713 }, 00:17:13.713 { 00:17:13.713 "name": "BaseBdev4", 00:17:13.713 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:13.713 "is_configured": true, 00:17:13.713 "data_offset": 2048, 00:17:13.713 "data_size": 63488 00:17:13.713 } 00:17:13.713 ] 00:17:13.713 }' 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.713 17:09:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.713 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.971 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.971 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.971 "name": "raid_bdev1", 00:17:13.971 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:13.971 "strip_size_kb": 64, 00:17:13.971 "state": "online", 00:17:13.971 "raid_level": "raid5f", 00:17:13.971 "superblock": true, 00:17:13.971 "num_base_bdevs": 4, 00:17:13.971 "num_base_bdevs_discovered": 4, 00:17:13.971 "num_base_bdevs_operational": 4, 00:17:13.971 "base_bdevs_list": [ 00:17:13.971 { 00:17:13.971 "name": "spare", 00:17:13.971 "uuid": "b04ca50b-e7b3-5ee7-a5c6-1b25f9d2fa65", 00:17:13.971 "is_configured": true, 00:17:13.971 "data_offset": 2048, 00:17:13.971 "data_size": 63488 00:17:13.971 }, 00:17:13.971 { 00:17:13.971 "name": "BaseBdev2", 00:17:13.972 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:13.972 "is_configured": true, 00:17:13.972 "data_offset": 2048, 00:17:13.972 "data_size": 63488 00:17:13.972 }, 00:17:13.972 { 00:17:13.972 "name": "BaseBdev3", 00:17:13.972 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:13.972 "is_configured": true, 00:17:13.972 "data_offset": 2048, 00:17:13.972 "data_size": 63488 00:17:13.972 }, 00:17:13.972 { 00:17:13.972 "name": "BaseBdev4", 00:17:13.972 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:13.972 "is_configured": true, 00:17:13.972 "data_offset": 2048, 
00:17:13.972 "data_size": 63488 00:17:13.972 } 00:17:13.972 ] 00:17:13.972 }' 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.972 "name": "raid_bdev1", 00:17:13.972 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:13.972 "strip_size_kb": 64, 00:17:13.972 "state": "online", 00:17:13.972 "raid_level": "raid5f", 00:17:13.972 "superblock": true, 00:17:13.972 "num_base_bdevs": 4, 00:17:13.972 "num_base_bdevs_discovered": 4, 00:17:13.972 "num_base_bdevs_operational": 4, 00:17:13.972 "base_bdevs_list": [ 00:17:13.972 { 00:17:13.972 "name": "spare", 00:17:13.972 "uuid": "b04ca50b-e7b3-5ee7-a5c6-1b25f9d2fa65", 00:17:13.972 "is_configured": true, 00:17:13.972 "data_offset": 2048, 00:17:13.972 "data_size": 63488 00:17:13.972 }, 00:17:13.972 { 00:17:13.972 "name": "BaseBdev2", 00:17:13.972 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:13.972 "is_configured": true, 00:17:13.972 "data_offset": 2048, 00:17:13.972 "data_size": 63488 00:17:13.972 }, 00:17:13.972 { 00:17:13.972 "name": "BaseBdev3", 00:17:13.972 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:13.972 "is_configured": true, 00:17:13.972 "data_offset": 2048, 00:17:13.972 "data_size": 63488 00:17:13.972 }, 00:17:13.972 { 00:17:13.972 "name": "BaseBdev4", 00:17:13.972 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:13.972 "is_configured": true, 00:17:13.972 "data_offset": 2048, 00:17:13.972 "data_size": 63488 00:17:13.972 } 00:17:13.972 ] 00:17:13.972 }' 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.972 17:09:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:14.539 17:09:38 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.539 [2024-11-20 17:09:38.257550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.539 [2024-11-20 17:09:38.257585] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.539 [2024-11-20 17:09:38.257687] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.539 [2024-11-20 17:09:38.257819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.539 [2024-11-20 17:09:38.257886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:14.539 17:09:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:14.539 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:14.798 /dev/nbd0 00:17:14.798 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:14.798 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:14.798 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:14.798 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:14.798 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:14.798 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:14.798 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:14.798 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:14.798 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:17:14.798 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:14.798 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:14.798 1+0 records in 00:17:14.798 1+0 records out 00:17:14.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272388 s, 15.0 MB/s 00:17:14.798 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.798 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:14.798 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.798 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:14.799 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:14.799 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:14.799 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:14.799 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:15.057 /dev/nbd1 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:15.057 17:09:38 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:15.057 1+0 records in 00:17:15.057 1+0 records out 00:17:15.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366834 s, 11.2 MB/s 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:15.057 17:09:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:15.316 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:15.316 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:17:15.316 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:15.316 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:15.316 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:15.316 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:15.316 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:15.574 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:15.574 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:15.574 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:15.574 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:15.574 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:15.574 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:15.574 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:15.574 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:15.574 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:15.574 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:15.832 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:15.832 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:15.832 17:09:39 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:15.832 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:15.832 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:15.832 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:15.832 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:15.832 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:15.832 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:15.832 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:15.832 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.832 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.832 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.832 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:15.833 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.833 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.833 [2024-11-20 17:09:39.669911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:15.833 [2024-11-20 17:09:39.669995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.833 [2024-11-20 17:09:39.670033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:15.833 [2024-11-20 17:09:39.670048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.833 [2024-11-20 17:09:39.672812] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.833 [2024-11-20 17:09:39.673020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:15.833 [2024-11-20 17:09:39.673115] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:15.833 [2024-11-20 17:09:39.673181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.833 [2024-11-20 17:09:39.673386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:15.833 [2024-11-20 17:09:39.673532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:15.833 [2024-11-20 17:09:39.673662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:15.833 spare 00:17:15.833 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.833 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:15.833 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.833 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.091 [2024-11-20 17:09:39.773763] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:16.091 [2024-11-20 17:09:39.773824] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:16.091 [2024-11-20 17:09:39.774173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:16.091 [2024-11-20 17:09:39.780703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:16.091 [2024-11-20 17:09:39.780924] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:16.091 [2024-11-20 17:09:39.781171] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.091 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.091 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:16.091 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.091 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.091 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.091 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.091 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:16.091 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.091 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.092 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.092 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.092 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.092 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.092 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.092 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.092 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.092 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.092 "name": "raid_bdev1", 00:17:16.092 
"uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:16.092 "strip_size_kb": 64, 00:17:16.092 "state": "online", 00:17:16.092 "raid_level": "raid5f", 00:17:16.092 "superblock": true, 00:17:16.092 "num_base_bdevs": 4, 00:17:16.092 "num_base_bdevs_discovered": 4, 00:17:16.092 "num_base_bdevs_operational": 4, 00:17:16.092 "base_bdevs_list": [ 00:17:16.092 { 00:17:16.092 "name": "spare", 00:17:16.092 "uuid": "b04ca50b-e7b3-5ee7-a5c6-1b25f9d2fa65", 00:17:16.092 "is_configured": true, 00:17:16.092 "data_offset": 2048, 00:17:16.092 "data_size": 63488 00:17:16.092 }, 00:17:16.092 { 00:17:16.092 "name": "BaseBdev2", 00:17:16.092 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:16.092 "is_configured": true, 00:17:16.092 "data_offset": 2048, 00:17:16.092 "data_size": 63488 00:17:16.092 }, 00:17:16.092 { 00:17:16.092 "name": "BaseBdev3", 00:17:16.092 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:16.092 "is_configured": true, 00:17:16.092 "data_offset": 2048, 00:17:16.092 "data_size": 63488 00:17:16.092 }, 00:17:16.092 { 00:17:16.092 "name": "BaseBdev4", 00:17:16.092 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:16.092 "is_configured": true, 00:17:16.092 "data_offset": 2048, 00:17:16.092 "data_size": 63488 00:17:16.092 } 00:17:16.092 ] 00:17:16.092 }' 00:17:16.092 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.092 17:09:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.658 "name": "raid_bdev1", 00:17:16.658 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:16.658 "strip_size_kb": 64, 00:17:16.658 "state": "online", 00:17:16.658 "raid_level": "raid5f", 00:17:16.658 "superblock": true, 00:17:16.658 "num_base_bdevs": 4, 00:17:16.658 "num_base_bdevs_discovered": 4, 00:17:16.658 "num_base_bdevs_operational": 4, 00:17:16.658 "base_bdevs_list": [ 00:17:16.658 { 00:17:16.658 "name": "spare", 00:17:16.658 "uuid": "b04ca50b-e7b3-5ee7-a5c6-1b25f9d2fa65", 00:17:16.658 "is_configured": true, 00:17:16.658 "data_offset": 2048, 00:17:16.658 "data_size": 63488 00:17:16.658 }, 00:17:16.658 { 00:17:16.658 "name": "BaseBdev2", 00:17:16.658 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:16.658 "is_configured": true, 00:17:16.658 "data_offset": 2048, 00:17:16.658 "data_size": 63488 00:17:16.658 }, 00:17:16.658 { 00:17:16.658 "name": "BaseBdev3", 00:17:16.658 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:16.658 "is_configured": true, 00:17:16.658 "data_offset": 2048, 00:17:16.658 "data_size": 63488 00:17:16.658 }, 00:17:16.658 { 00:17:16.658 "name": "BaseBdev4", 00:17:16.658 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:16.658 "is_configured": true, 00:17:16.658 "data_offset": 2048, 00:17:16.658 "data_size": 63488 
00:17:16.658 } 00:17:16.658 ] 00:17:16.658 }' 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.658 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.659 [2024-11-20 17:09:40.509052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.659 17:09:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.659 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.917 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.917 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.917 "name": "raid_bdev1", 00:17:16.917 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:16.917 "strip_size_kb": 64, 00:17:16.917 "state": "online", 00:17:16.917 "raid_level": "raid5f", 00:17:16.917 "superblock": true, 00:17:16.917 "num_base_bdevs": 4, 00:17:16.917 "num_base_bdevs_discovered": 3, 00:17:16.917 "num_base_bdevs_operational": 3, 00:17:16.917 "base_bdevs_list": [ 00:17:16.917 { 00:17:16.917 "name": null, 00:17:16.917 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:16.917 "is_configured": false, 00:17:16.917 "data_offset": 0, 00:17:16.917 "data_size": 63488 00:17:16.917 }, 00:17:16.917 { 00:17:16.917 "name": "BaseBdev2", 00:17:16.917 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:16.917 "is_configured": true, 00:17:16.917 "data_offset": 2048, 00:17:16.917 "data_size": 63488 00:17:16.917 }, 00:17:16.917 { 00:17:16.917 "name": "BaseBdev3", 00:17:16.917 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:16.917 "is_configured": true, 00:17:16.917 "data_offset": 2048, 00:17:16.917 "data_size": 63488 00:17:16.917 }, 00:17:16.917 { 00:17:16.917 "name": "BaseBdev4", 00:17:16.917 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:16.917 "is_configured": true, 00:17:16.917 "data_offset": 2048, 00:17:16.917 "data_size": 63488 00:17:16.917 } 00:17:16.917 ] 00:17:16.917 }' 00:17:16.917 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.918 17:09:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.176 17:09:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:17.176 17:09:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.176 17:09:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.176 [2024-11-20 17:09:41.033331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.176 [2024-11-20 17:09:41.033603] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:17.176 [2024-11-20 17:09:41.033643] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:17.176 [2024-11-20 17:09:41.033717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.435 [2024-11-20 17:09:41.047902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:17.435 17:09:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.435 17:09:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:17.435 [2024-11-20 17:09:41.056891] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.370 "name": "raid_bdev1", 00:17:18.370 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:18.370 "strip_size_kb": 64, 00:17:18.370 "state": "online", 00:17:18.370 
"raid_level": "raid5f", 00:17:18.370 "superblock": true, 00:17:18.370 "num_base_bdevs": 4, 00:17:18.370 "num_base_bdevs_discovered": 4, 00:17:18.370 "num_base_bdevs_operational": 4, 00:17:18.370 "process": { 00:17:18.370 "type": "rebuild", 00:17:18.370 "target": "spare", 00:17:18.370 "progress": { 00:17:18.370 "blocks": 17280, 00:17:18.370 "percent": 9 00:17:18.370 } 00:17:18.370 }, 00:17:18.370 "base_bdevs_list": [ 00:17:18.370 { 00:17:18.370 "name": "spare", 00:17:18.370 "uuid": "b04ca50b-e7b3-5ee7-a5c6-1b25f9d2fa65", 00:17:18.370 "is_configured": true, 00:17:18.370 "data_offset": 2048, 00:17:18.370 "data_size": 63488 00:17:18.370 }, 00:17:18.370 { 00:17:18.370 "name": "BaseBdev2", 00:17:18.370 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:18.370 "is_configured": true, 00:17:18.370 "data_offset": 2048, 00:17:18.370 "data_size": 63488 00:17:18.370 }, 00:17:18.370 { 00:17:18.370 "name": "BaseBdev3", 00:17:18.370 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:18.370 "is_configured": true, 00:17:18.370 "data_offset": 2048, 00:17:18.370 "data_size": 63488 00:17:18.370 }, 00:17:18.370 { 00:17:18.370 "name": "BaseBdev4", 00:17:18.370 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:18.370 "is_configured": true, 00:17:18.370 "data_offset": 2048, 00:17:18.370 "data_size": 63488 00:17:18.370 } 00:17:18.370 ] 00:17:18.370 }' 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.370 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.370 [2024-11-20 17:09:42.218367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:18.629 [2024-11-20 17:09:42.269198] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:18.629 [2024-11-20 17:09:42.269284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.629 [2024-11-20 17:09:42.269311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:18.629 [2024-11-20 17:09:42.269336] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:18.629 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.629 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:18.629 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.629 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.629 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.629 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.629 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:18.629 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.629 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.629 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.629 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:18.629 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.629 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.629 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.629 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.629 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.629 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.629 "name": "raid_bdev1", 00:17:18.629 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:18.629 "strip_size_kb": 64, 00:17:18.629 "state": "online", 00:17:18.629 "raid_level": "raid5f", 00:17:18.629 "superblock": true, 00:17:18.629 "num_base_bdevs": 4, 00:17:18.629 "num_base_bdevs_discovered": 3, 00:17:18.629 "num_base_bdevs_operational": 3, 00:17:18.629 "base_bdevs_list": [ 00:17:18.629 { 00:17:18.629 "name": null, 00:17:18.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.629 "is_configured": false, 00:17:18.629 "data_offset": 0, 00:17:18.629 "data_size": 63488 00:17:18.629 }, 00:17:18.629 { 00:17:18.629 "name": "BaseBdev2", 00:17:18.629 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:18.629 "is_configured": true, 00:17:18.629 "data_offset": 2048, 00:17:18.629 "data_size": 63488 00:17:18.629 }, 00:17:18.629 { 00:17:18.629 "name": "BaseBdev3", 00:17:18.629 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:18.629 "is_configured": true, 00:17:18.629 "data_offset": 2048, 00:17:18.629 "data_size": 63488 00:17:18.629 }, 00:17:18.629 { 00:17:18.629 "name": "BaseBdev4", 00:17:18.629 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:18.629 "is_configured": true, 00:17:18.630 "data_offset": 2048, 00:17:18.630 "data_size": 63488 00:17:18.630 } 00:17:18.630 ] 00:17:18.630 }' 
00:17:18.630 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.630 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.197 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:19.197 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.197 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.197 [2024-11-20 17:09:42.825016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:19.197 [2024-11-20 17:09:42.825344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.197 [2024-11-20 17:09:42.825400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:19.197 [2024-11-20 17:09:42.825443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.197 [2024-11-20 17:09:42.826196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.197 [2024-11-20 17:09:42.826245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:19.197 [2024-11-20 17:09:42.826368] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:19.197 [2024-11-20 17:09:42.826405] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:19.197 [2024-11-20 17:09:42.826419] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:19.197 [2024-11-20 17:09:42.826466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.197 [2024-11-20 17:09:42.841651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:19.197 spare 00:17:19.197 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.197 17:09:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:19.197 [2024-11-20 17:09:42.850868] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:20.132 17:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.132 17:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.132 17:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.132 17:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.132 17:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.132 17:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.132 17:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.132 17:09:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.132 17:09:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.132 17:09:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.132 17:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.132 "name": "raid_bdev1", 00:17:20.132 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:20.132 "strip_size_kb": 64, 00:17:20.132 "state": 
"online", 00:17:20.132 "raid_level": "raid5f", 00:17:20.132 "superblock": true, 00:17:20.132 "num_base_bdevs": 4, 00:17:20.132 "num_base_bdevs_discovered": 4, 00:17:20.132 "num_base_bdevs_operational": 4, 00:17:20.132 "process": { 00:17:20.132 "type": "rebuild", 00:17:20.132 "target": "spare", 00:17:20.132 "progress": { 00:17:20.132 "blocks": 17280, 00:17:20.132 "percent": 9 00:17:20.132 } 00:17:20.132 }, 00:17:20.132 "base_bdevs_list": [ 00:17:20.132 { 00:17:20.132 "name": "spare", 00:17:20.132 "uuid": "b04ca50b-e7b3-5ee7-a5c6-1b25f9d2fa65", 00:17:20.132 "is_configured": true, 00:17:20.132 "data_offset": 2048, 00:17:20.132 "data_size": 63488 00:17:20.132 }, 00:17:20.132 { 00:17:20.132 "name": "BaseBdev2", 00:17:20.132 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:20.132 "is_configured": true, 00:17:20.132 "data_offset": 2048, 00:17:20.132 "data_size": 63488 00:17:20.132 }, 00:17:20.132 { 00:17:20.132 "name": "BaseBdev3", 00:17:20.132 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:20.132 "is_configured": true, 00:17:20.132 "data_offset": 2048, 00:17:20.132 "data_size": 63488 00:17:20.132 }, 00:17:20.132 { 00:17:20.132 "name": "BaseBdev4", 00:17:20.132 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:20.132 "is_configured": true, 00:17:20.132 "data_offset": 2048, 00:17:20.132 "data_size": 63488 00:17:20.132 } 00:17:20.132 ] 00:17:20.132 }' 00:17:20.132 17:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.132 17:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.132 17:09:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:20.391 17:09:44 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.391 [2024-11-20 17:09:44.016645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:20.391 [2024-11-20 17:09:44.062272] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:20.391 [2024-11-20 17:09:44.062355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.391 [2024-11-20 17:09:44.062384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:20.391 [2024-11-20 17:09:44.062396] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.391 17:09:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.391 "name": "raid_bdev1", 00:17:20.391 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:20.391 "strip_size_kb": 64, 00:17:20.391 "state": "online", 00:17:20.391 "raid_level": "raid5f", 00:17:20.391 "superblock": true, 00:17:20.391 "num_base_bdevs": 4, 00:17:20.391 "num_base_bdevs_discovered": 3, 00:17:20.391 "num_base_bdevs_operational": 3, 00:17:20.391 "base_bdevs_list": [ 00:17:20.391 { 00:17:20.391 "name": null, 00:17:20.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.391 "is_configured": false, 00:17:20.391 "data_offset": 0, 00:17:20.391 "data_size": 63488 00:17:20.391 }, 00:17:20.391 { 00:17:20.391 "name": "BaseBdev2", 00:17:20.391 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:20.391 "is_configured": true, 00:17:20.391 "data_offset": 2048, 00:17:20.391 "data_size": 63488 00:17:20.391 }, 00:17:20.391 { 00:17:20.391 "name": "BaseBdev3", 00:17:20.391 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:20.391 "is_configured": true, 00:17:20.391 "data_offset": 2048, 00:17:20.391 "data_size": 63488 00:17:20.391 }, 00:17:20.391 { 00:17:20.391 "name": "BaseBdev4", 00:17:20.391 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:20.391 "is_configured": true, 00:17:20.391 "data_offset": 2048, 00:17:20.391 
"data_size": 63488 00:17:20.391 } 00:17:20.391 ] 00:17:20.391 }' 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.391 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.958 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:20.958 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.958 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:20.958 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:20.958 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.958 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.959 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.959 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.959 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.959 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.959 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.959 "name": "raid_bdev1", 00:17:20.959 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:20.959 "strip_size_kb": 64, 00:17:20.959 "state": "online", 00:17:20.959 "raid_level": "raid5f", 00:17:20.959 "superblock": true, 00:17:20.959 "num_base_bdevs": 4, 00:17:20.959 "num_base_bdevs_discovered": 3, 00:17:20.959 "num_base_bdevs_operational": 3, 00:17:20.959 "base_bdevs_list": [ 00:17:20.959 { 00:17:20.959 "name": null, 00:17:20.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.959 
"is_configured": false, 00:17:20.959 "data_offset": 0, 00:17:20.959 "data_size": 63488 00:17:20.959 }, 00:17:20.959 { 00:17:20.959 "name": "BaseBdev2", 00:17:20.959 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:20.959 "is_configured": true, 00:17:20.959 "data_offset": 2048, 00:17:20.959 "data_size": 63488 00:17:20.959 }, 00:17:20.959 { 00:17:20.959 "name": "BaseBdev3", 00:17:20.959 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:20.959 "is_configured": true, 00:17:20.959 "data_offset": 2048, 00:17:20.959 "data_size": 63488 00:17:20.959 }, 00:17:20.959 { 00:17:20.959 "name": "BaseBdev4", 00:17:20.959 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:20.959 "is_configured": true, 00:17:20.959 "data_offset": 2048, 00:17:20.959 "data_size": 63488 00:17:20.959 } 00:17:20.959 ] 00:17:20.959 }' 00:17:20.959 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.959 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:20.959 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.959 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:20.959 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:20.959 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.959 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.959 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.959 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:20.959 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.959 17:09:44 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.959 [2024-11-20 17:09:44.774177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:20.959 [2024-11-20 17:09:44.774269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.959 [2024-11-20 17:09:44.774302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:20.959 [2024-11-20 17:09:44.774318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.959 [2024-11-20 17:09:44.774945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.959 [2024-11-20 17:09:44.774975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:20.959 [2024-11-20 17:09:44.775072] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:20.959 [2024-11-20 17:09:44.775109] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:20.959 [2024-11-20 17:09:44.775160] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:20.959 [2024-11-20 17:09:44.775206] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:20.959 BaseBdev1 00:17:20.959 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.959 17:09:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:22.334 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:22.334 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.334 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:22.334 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.334 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.334 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:22.334 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.334 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.334 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.334 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.334 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.334 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.334 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.334 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.334 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.334 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.334 "name": "raid_bdev1", 00:17:22.334 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:22.334 "strip_size_kb": 64, 00:17:22.334 "state": "online", 00:17:22.334 "raid_level": "raid5f", 00:17:22.334 "superblock": true, 00:17:22.334 "num_base_bdevs": 4, 00:17:22.334 "num_base_bdevs_discovered": 3, 00:17:22.334 "num_base_bdevs_operational": 3, 00:17:22.334 "base_bdevs_list": [ 00:17:22.334 { 00:17:22.334 "name": null, 00:17:22.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.334 "is_configured": false, 00:17:22.334 
"data_offset": 0, 00:17:22.334 "data_size": 63488 00:17:22.334 }, 00:17:22.334 { 00:17:22.334 "name": "BaseBdev2", 00:17:22.334 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:22.334 "is_configured": true, 00:17:22.334 "data_offset": 2048, 00:17:22.334 "data_size": 63488 00:17:22.334 }, 00:17:22.334 { 00:17:22.334 "name": "BaseBdev3", 00:17:22.334 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:22.334 "is_configured": true, 00:17:22.334 "data_offset": 2048, 00:17:22.334 "data_size": 63488 00:17:22.334 }, 00:17:22.334 { 00:17:22.334 "name": "BaseBdev4", 00:17:22.334 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:22.334 "is_configured": true, 00:17:22.335 "data_offset": 2048, 00:17:22.335 "data_size": 63488 00:17:22.335 } 00:17:22.335 ] 00:17:22.335 }' 00:17:22.335 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.335 17:09:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.594 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:22.594 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.594 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:22.594 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:22.594 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.594 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.594 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.594 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.594 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:22.594 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.594 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.594 "name": "raid_bdev1", 00:17:22.594 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:22.594 "strip_size_kb": 64, 00:17:22.594 "state": "online", 00:17:22.594 "raid_level": "raid5f", 00:17:22.594 "superblock": true, 00:17:22.594 "num_base_bdevs": 4, 00:17:22.594 "num_base_bdevs_discovered": 3, 00:17:22.594 "num_base_bdevs_operational": 3, 00:17:22.594 "base_bdevs_list": [ 00:17:22.594 { 00:17:22.594 "name": null, 00:17:22.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.594 "is_configured": false, 00:17:22.594 "data_offset": 0, 00:17:22.594 "data_size": 63488 00:17:22.594 }, 00:17:22.594 { 00:17:22.594 "name": "BaseBdev2", 00:17:22.594 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:22.594 "is_configured": true, 00:17:22.594 "data_offset": 2048, 00:17:22.594 "data_size": 63488 00:17:22.594 }, 00:17:22.594 { 00:17:22.594 "name": "BaseBdev3", 00:17:22.594 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:22.594 "is_configured": true, 00:17:22.594 "data_offset": 2048, 00:17:22.594 "data_size": 63488 00:17:22.594 }, 00:17:22.594 { 00:17:22.594 "name": "BaseBdev4", 00:17:22.594 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:22.594 "is_configured": true, 00:17:22.594 "data_offset": 2048, 00:17:22.594 "data_size": 63488 00:17:22.594 } 00:17:22.594 ] 00:17:22.594 }' 00:17:22.594 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.594 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:22.594 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.853 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:22.853 
17:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:22.853 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:22.853 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:22.853 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:22.853 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.853 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:22.853 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.853 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:22.853 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.853 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.853 [2024-11-20 17:09:46.490749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.853 [2024-11-20 17:09:46.491026] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:22.853 [2024-11-20 17:09:46.491065] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:22.853 request: 00:17:22.853 { 00:17:22.853 "base_bdev": "BaseBdev1", 00:17:22.853 "raid_bdev": "raid_bdev1", 00:17:22.853 "method": "bdev_raid_add_base_bdev", 00:17:22.853 "req_id": 1 00:17:22.853 } 00:17:22.853 Got JSON-RPC error response 00:17:22.853 response: 00:17:22.853 { 00:17:22.853 "code": -22, 00:17:22.853 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:17:22.853 } 00:17:22.853 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:22.853 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:22.853 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:22.853 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:22.853 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:22.853 17:09:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:23.792 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:23.792 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.792 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.792 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.792 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.792 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.792 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.792 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.792 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.792 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.792 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.792 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.792 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.792 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.792 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.792 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.792 "name": "raid_bdev1", 00:17:23.792 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:23.792 "strip_size_kb": 64, 00:17:23.792 "state": "online", 00:17:23.792 "raid_level": "raid5f", 00:17:23.792 "superblock": true, 00:17:23.792 "num_base_bdevs": 4, 00:17:23.793 "num_base_bdevs_discovered": 3, 00:17:23.793 "num_base_bdevs_operational": 3, 00:17:23.793 "base_bdevs_list": [ 00:17:23.793 { 00:17:23.793 "name": null, 00:17:23.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.793 "is_configured": false, 00:17:23.793 "data_offset": 0, 00:17:23.793 "data_size": 63488 00:17:23.793 }, 00:17:23.793 { 00:17:23.793 "name": "BaseBdev2", 00:17:23.793 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:23.793 "is_configured": true, 00:17:23.793 "data_offset": 2048, 00:17:23.793 "data_size": 63488 00:17:23.793 }, 00:17:23.793 { 00:17:23.793 "name": "BaseBdev3", 00:17:23.793 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:23.793 "is_configured": true, 00:17:23.793 "data_offset": 2048, 00:17:23.793 "data_size": 63488 00:17:23.793 }, 00:17:23.793 { 00:17:23.793 "name": "BaseBdev4", 00:17:23.793 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:23.793 "is_configured": true, 00:17:23.793 "data_offset": 2048, 00:17:23.793 "data_size": 63488 00:17:23.793 } 00:17:23.793 ] 00:17:23.793 }' 00:17:23.793 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.793 17:09:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:24.360 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.361 "name": "raid_bdev1", 00:17:24.361 "uuid": "f6655f9e-efb8-4a35-933c-ec5ab69f344a", 00:17:24.361 "strip_size_kb": 64, 00:17:24.361 "state": "online", 00:17:24.361 "raid_level": "raid5f", 00:17:24.361 "superblock": true, 00:17:24.361 "num_base_bdevs": 4, 00:17:24.361 "num_base_bdevs_discovered": 3, 00:17:24.361 "num_base_bdevs_operational": 3, 00:17:24.361 "base_bdevs_list": [ 00:17:24.361 { 00:17:24.361 "name": null, 00:17:24.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.361 "is_configured": false, 00:17:24.361 "data_offset": 0, 00:17:24.361 "data_size": 63488 00:17:24.361 }, 00:17:24.361 { 00:17:24.361 "name": "BaseBdev2", 00:17:24.361 "uuid": "453a6fbe-9a8c-50eb-8bc0-f51ac9f161ef", 00:17:24.361 "is_configured": true, 
00:17:24.361 "data_offset": 2048, 00:17:24.361 "data_size": 63488 00:17:24.361 }, 00:17:24.361 { 00:17:24.361 "name": "BaseBdev3", 00:17:24.361 "uuid": "a07edf16-2c94-5b97-901d-3492cf37ecf5", 00:17:24.361 "is_configured": true, 00:17:24.361 "data_offset": 2048, 00:17:24.361 "data_size": 63488 00:17:24.361 }, 00:17:24.361 { 00:17:24.361 "name": "BaseBdev4", 00:17:24.361 "uuid": "b04bbc87-b839-5968-825b-d604c98773e5", 00:17:24.361 "is_configured": true, 00:17:24.361 "data_offset": 2048, 00:17:24.361 "data_size": 63488 00:17:24.361 } 00:17:24.361 ] 00:17:24.361 }' 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85293 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85293 ']' 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85293 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85293 00:17:24.361 killing process with pid 85293 00:17:24.361 Received shutdown signal, test time was about 60.000000 seconds 00:17:24.361 00:17:24.361 Latency(us) 00:17:24.361 [2024-11-20T17:09:48.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.361 [2024-11-20T17:09:48.230Z] 
=================================================================================================================== 00:17:24.361 [2024-11-20T17:09:48.230Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85293' 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85293 00:17:24.361 [2024-11-20 17:09:48.212820] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:24.361 17:09:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85293 00:17:24.361 [2024-11-20 17:09:48.212988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.361 [2024-11-20 17:09:48.213083] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:24.361 [2024-11-20 17:09:48.213120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:24.930 [2024-11-20 17:09:48.638873] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:25.865 ************************************ 00:17:25.865 END TEST raid5f_rebuild_test_sb 00:17:25.865 ************************************ 00:17:25.865 17:09:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:25.865 00:17:25.865 real 0m28.564s 00:17:25.865 user 0m37.097s 00:17:25.865 sys 0m2.930s 00:17:25.865 17:09:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.865 17:09:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.865 17:09:49 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:25.865 17:09:49 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:25.865 17:09:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:25.865 17:09:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.865 17:09:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:25.865 ************************************ 00:17:25.865 START TEST raid_state_function_test_sb_4k 00:17:25.865 ************************************ 00:17:25.865 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:25.865 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:25.865 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:25.865 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:25.865 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:25.865 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:25.865 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:25.865 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:25.865 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:25.865 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:25.865 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:25.865 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:25.865 17:09:49 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:26.125 Process raid pid: 86116 00:17:26.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86116 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86116' 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86116 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@835 -- # '[' -z 86116 ']' 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.125 17:09:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.125 [2024-11-20 17:09:49.843422] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:17:26.125 [2024-11-20 17:09:49.843963] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.384 [2024-11-20 17:09:50.034571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.384 [2024-11-20 17:09:50.171502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.642 [2024-11-20 17:09:50.380441] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:26.642 [2024-11-20 17:09:50.380746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:27.209 
17:09:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.209 [2024-11-20 17:09:50.814847] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:27.209 [2024-11-20 17:09:50.815112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:27.209 [2024-11-20 17:09:50.815269] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:27.209 [2024-11-20 17:09:50.815331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.209 17:09:50 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.209 "name": "Existed_Raid", 00:17:27.209 "uuid": "0ead53ed-3528-4862-9c33-65bea86b7778", 00:17:27.209 "strip_size_kb": 0, 00:17:27.209 "state": "configuring", 00:17:27.209 "raid_level": "raid1", 00:17:27.209 "superblock": true, 00:17:27.209 "num_base_bdevs": 2, 00:17:27.209 "num_base_bdevs_discovered": 0, 00:17:27.209 "num_base_bdevs_operational": 2, 00:17:27.209 "base_bdevs_list": [ 00:17:27.209 { 00:17:27.209 "name": "BaseBdev1", 00:17:27.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.209 "is_configured": false, 00:17:27.209 "data_offset": 0, 00:17:27.209 "data_size": 0 00:17:27.209 }, 00:17:27.209 { 00:17:27.209 "name": "BaseBdev2", 00:17:27.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.209 "is_configured": false, 00:17:27.209 "data_offset": 0, 00:17:27.209 "data_size": 0 00:17:27.209 } 00:17:27.209 ] 00:17:27.209 }' 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.209 17:09:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.776 [2024-11-20 17:09:51.350960] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:27.776 [2024-11-20 17:09:51.351011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.776 [2024-11-20 17:09:51.363007] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:27.776 [2024-11-20 17:09:51.363180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:27.776 [2024-11-20 17:09:51.363207] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:27.776 [2024-11-20 17:09:51.363228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:27.776 [2024-11-20 17:09:51.411521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.776 BaseBdev1 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.776 [ 00:17:27.776 { 00:17:27.776 "name": "BaseBdev1", 00:17:27.776 "aliases": [ 00:17:27.776 "e1f91c06-dcce-4a77-8e8e-4550d0cff84b" 00:17:27.776 
], 00:17:27.776 "product_name": "Malloc disk", 00:17:27.776 "block_size": 4096, 00:17:27.776 "num_blocks": 8192, 00:17:27.776 "uuid": "e1f91c06-dcce-4a77-8e8e-4550d0cff84b", 00:17:27.776 "assigned_rate_limits": { 00:17:27.776 "rw_ios_per_sec": 0, 00:17:27.776 "rw_mbytes_per_sec": 0, 00:17:27.776 "r_mbytes_per_sec": 0, 00:17:27.776 "w_mbytes_per_sec": 0 00:17:27.776 }, 00:17:27.776 "claimed": true, 00:17:27.776 "claim_type": "exclusive_write", 00:17:27.776 "zoned": false, 00:17:27.776 "supported_io_types": { 00:17:27.776 "read": true, 00:17:27.776 "write": true, 00:17:27.776 "unmap": true, 00:17:27.776 "flush": true, 00:17:27.776 "reset": true, 00:17:27.776 "nvme_admin": false, 00:17:27.776 "nvme_io": false, 00:17:27.776 "nvme_io_md": false, 00:17:27.776 "write_zeroes": true, 00:17:27.776 "zcopy": true, 00:17:27.776 "get_zone_info": false, 00:17:27.776 "zone_management": false, 00:17:27.776 "zone_append": false, 00:17:27.776 "compare": false, 00:17:27.776 "compare_and_write": false, 00:17:27.776 "abort": true, 00:17:27.776 "seek_hole": false, 00:17:27.776 "seek_data": false, 00:17:27.776 "copy": true, 00:17:27.776 "nvme_iov_md": false 00:17:27.776 }, 00:17:27.776 "memory_domains": [ 00:17:27.776 { 00:17:27.776 "dma_device_id": "system", 00:17:27.776 "dma_device_type": 1 00:17:27.776 }, 00:17:27.776 { 00:17:27.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.776 "dma_device_type": 2 00:17:27.776 } 00:17:27.776 ], 00:17:27.776 "driver_specific": {} 00:17:27.776 } 00:17:27.776 ] 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.776 "name": "Existed_Raid", 00:17:27.776 "uuid": "4c76f394-9e4a-4cf1-8a3d-75014af98113", 00:17:27.776 "strip_size_kb": 0, 00:17:27.776 "state": "configuring", 00:17:27.776 "raid_level": "raid1", 00:17:27.776 "superblock": true, 00:17:27.776 "num_base_bdevs": 2, 00:17:27.776 "num_base_bdevs_discovered": 1, 
00:17:27.776 "num_base_bdevs_operational": 2, 00:17:27.776 "base_bdevs_list": [ 00:17:27.776 { 00:17:27.776 "name": "BaseBdev1", 00:17:27.776 "uuid": "e1f91c06-dcce-4a77-8e8e-4550d0cff84b", 00:17:27.776 "is_configured": true, 00:17:27.776 "data_offset": 256, 00:17:27.776 "data_size": 7936 00:17:27.776 }, 00:17:27.776 { 00:17:27.776 "name": "BaseBdev2", 00:17:27.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.776 "is_configured": false, 00:17:27.776 "data_offset": 0, 00:17:27.776 "data_size": 0 00:17:27.776 } 00:17:27.776 ] 00:17:27.776 }' 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.776 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.343 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.344 [2024-11-20 17:09:51.967796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:28.344 [2024-11-20 17:09:51.967846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.344 [2024-11-20 17:09:51.979870] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:28.344 [2024-11-20 17:09:51.982573] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:28.344 [2024-11-20 17:09:51.982644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.344 17:09:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.344 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.344 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.344 "name": "Existed_Raid", 00:17:28.344 "uuid": "b2a01e70-7e67-47e0-ba7f-7023bd27f47e", 00:17:28.344 "strip_size_kb": 0, 00:17:28.344 "state": "configuring", 00:17:28.344 "raid_level": "raid1", 00:17:28.344 "superblock": true, 00:17:28.344 "num_base_bdevs": 2, 00:17:28.344 "num_base_bdevs_discovered": 1, 00:17:28.344 "num_base_bdevs_operational": 2, 00:17:28.344 "base_bdevs_list": [ 00:17:28.344 { 00:17:28.344 "name": "BaseBdev1", 00:17:28.344 "uuid": "e1f91c06-dcce-4a77-8e8e-4550d0cff84b", 00:17:28.344 "is_configured": true, 00:17:28.344 "data_offset": 256, 00:17:28.344 "data_size": 7936 00:17:28.344 }, 00:17:28.344 { 00:17:28.344 "name": "BaseBdev2", 00:17:28.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.344 "is_configured": false, 00:17:28.344 "data_offset": 0, 00:17:28.344 "data_size": 0 00:17:28.344 } 00:17:28.344 ] 00:17:28.344 }' 00:17:28.344 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.344 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.911 17:09:52 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.911 [2024-11-20 17:09:52.526577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:28.911 [2024-11-20 17:09:52.526959] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:28.911 [2024-11-20 17:09:52.527003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:28.911 [2024-11-20 17:09:52.527373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:28.911 [2024-11-20 17:09:52.527679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:28.911 [2024-11-20 17:09:52.527702] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:28.911 [2024-11-20 17:09:52.527911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.911 BaseBdev2 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:28.911 17:09:52 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.911 [ 00:17:28.911 { 00:17:28.911 "name": "BaseBdev2", 00:17:28.911 "aliases": [ 00:17:28.911 "e3427090-97fc-4422-8f9e-1b39646d96fd" 00:17:28.911 ], 00:17:28.911 "product_name": "Malloc disk", 00:17:28.911 "block_size": 4096, 00:17:28.911 "num_blocks": 8192, 00:17:28.911 "uuid": "e3427090-97fc-4422-8f9e-1b39646d96fd", 00:17:28.911 "assigned_rate_limits": { 00:17:28.911 "rw_ios_per_sec": 0, 00:17:28.911 "rw_mbytes_per_sec": 0, 00:17:28.911 "r_mbytes_per_sec": 0, 00:17:28.911 "w_mbytes_per_sec": 0 00:17:28.911 }, 00:17:28.911 "claimed": true, 00:17:28.911 "claim_type": "exclusive_write", 00:17:28.911 "zoned": false, 00:17:28.911 "supported_io_types": { 00:17:28.911 "read": true, 00:17:28.911 "write": true, 00:17:28.911 "unmap": true, 00:17:28.911 "flush": true, 00:17:28.911 "reset": true, 00:17:28.911 "nvme_admin": false, 00:17:28.911 "nvme_io": false, 00:17:28.911 "nvme_io_md": false, 00:17:28.911 "write_zeroes": true, 00:17:28.911 "zcopy": true, 00:17:28.911 "get_zone_info": false, 00:17:28.911 "zone_management": false, 00:17:28.911 "zone_append": false, 00:17:28.911 "compare": false, 00:17:28.911 "compare_and_write": false, 00:17:28.911 "abort": true, 00:17:28.911 "seek_hole": false, 00:17:28.911 "seek_data": false, 00:17:28.911 "copy": true, 00:17:28.911 "nvme_iov_md": false 
00:17:28.911 }, 00:17:28.911 "memory_domains": [ 00:17:28.911 { 00:17:28.911 "dma_device_id": "system", 00:17:28.911 "dma_device_type": 1 00:17:28.911 }, 00:17:28.911 { 00:17:28.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.911 "dma_device_type": 2 00:17:28.911 } 00:17:28.911 ], 00:17:28.911 "driver_specific": {} 00:17:28.911 } 00:17:28.911 ] 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.911 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.912 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.912 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:28.912 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.912 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.912 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.912 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.912 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.912 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.912 "name": "Existed_Raid", 00:17:28.912 "uuid": "b2a01e70-7e67-47e0-ba7f-7023bd27f47e", 00:17:28.912 "strip_size_kb": 0, 00:17:28.912 "state": "online", 00:17:28.912 "raid_level": "raid1", 00:17:28.912 "superblock": true, 00:17:28.912 "num_base_bdevs": 2, 00:17:28.912 "num_base_bdevs_discovered": 2, 00:17:28.912 "num_base_bdevs_operational": 2, 00:17:28.912 "base_bdevs_list": [ 00:17:28.912 { 00:17:28.912 "name": "BaseBdev1", 00:17:28.912 "uuid": "e1f91c06-dcce-4a77-8e8e-4550d0cff84b", 00:17:28.912 "is_configured": true, 00:17:28.912 "data_offset": 256, 00:17:28.912 "data_size": 7936 00:17:28.912 }, 00:17:28.912 { 00:17:28.912 "name": "BaseBdev2", 00:17:28.912 "uuid": "e3427090-97fc-4422-8f9e-1b39646d96fd", 00:17:28.912 "is_configured": true, 00:17:28.912 "data_offset": 256, 00:17:28.912 "data_size": 7936 00:17:28.912 } 00:17:28.912 ] 00:17:28.912 }' 00:17:28.912 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.912 17:09:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:29.479 17:09:53 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:29.479 [2024-11-20 17:09:53.099248] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:29.479 "name": "Existed_Raid", 00:17:29.479 "aliases": [ 00:17:29.479 "b2a01e70-7e67-47e0-ba7f-7023bd27f47e" 00:17:29.479 ], 00:17:29.479 "product_name": "Raid Volume", 00:17:29.479 "block_size": 4096, 00:17:29.479 "num_blocks": 7936, 00:17:29.479 "uuid": "b2a01e70-7e67-47e0-ba7f-7023bd27f47e", 00:17:29.479 "assigned_rate_limits": { 00:17:29.479 "rw_ios_per_sec": 0, 00:17:29.479 "rw_mbytes_per_sec": 0, 00:17:29.479 "r_mbytes_per_sec": 0, 00:17:29.479 "w_mbytes_per_sec": 0 00:17:29.479 }, 00:17:29.479 "claimed": false, 00:17:29.479 "zoned": false, 00:17:29.479 "supported_io_types": { 00:17:29.479 "read": true, 
00:17:29.479 "write": true, 00:17:29.479 "unmap": false, 00:17:29.479 "flush": false, 00:17:29.479 "reset": true, 00:17:29.479 "nvme_admin": false, 00:17:29.479 "nvme_io": false, 00:17:29.479 "nvme_io_md": false, 00:17:29.479 "write_zeroes": true, 00:17:29.479 "zcopy": false, 00:17:29.479 "get_zone_info": false, 00:17:29.479 "zone_management": false, 00:17:29.479 "zone_append": false, 00:17:29.479 "compare": false, 00:17:29.479 "compare_and_write": false, 00:17:29.479 "abort": false, 00:17:29.479 "seek_hole": false, 00:17:29.479 "seek_data": false, 00:17:29.479 "copy": false, 00:17:29.479 "nvme_iov_md": false 00:17:29.479 }, 00:17:29.479 "memory_domains": [ 00:17:29.479 { 00:17:29.479 "dma_device_id": "system", 00:17:29.479 "dma_device_type": 1 00:17:29.479 }, 00:17:29.479 { 00:17:29.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.479 "dma_device_type": 2 00:17:29.479 }, 00:17:29.479 { 00:17:29.479 "dma_device_id": "system", 00:17:29.479 "dma_device_type": 1 00:17:29.479 }, 00:17:29.479 { 00:17:29.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.479 "dma_device_type": 2 00:17:29.479 } 00:17:29.479 ], 00:17:29.479 "driver_specific": { 00:17:29.479 "raid": { 00:17:29.479 "uuid": "b2a01e70-7e67-47e0-ba7f-7023bd27f47e", 00:17:29.479 "strip_size_kb": 0, 00:17:29.479 "state": "online", 00:17:29.479 "raid_level": "raid1", 00:17:29.479 "superblock": true, 00:17:29.479 "num_base_bdevs": 2, 00:17:29.479 "num_base_bdevs_discovered": 2, 00:17:29.479 "num_base_bdevs_operational": 2, 00:17:29.479 "base_bdevs_list": [ 00:17:29.479 { 00:17:29.479 "name": "BaseBdev1", 00:17:29.479 "uuid": "e1f91c06-dcce-4a77-8e8e-4550d0cff84b", 00:17:29.479 "is_configured": true, 00:17:29.479 "data_offset": 256, 00:17:29.479 "data_size": 7936 00:17:29.479 }, 00:17:29.479 { 00:17:29.479 "name": "BaseBdev2", 00:17:29.479 "uuid": "e3427090-97fc-4422-8f9e-1b39646d96fd", 00:17:29.479 "is_configured": true, 00:17:29.479 "data_offset": 256, 00:17:29.479 "data_size": 7936 00:17:29.479 } 
00:17:29.479 ] 00:17:29.479 } 00:17:29.479 } 00:17:29.479 }' 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:29.479 BaseBdev2' 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.479 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.738 [2024-11-20 17:09:53.362962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.738 "name": "Existed_Raid", 00:17:29.738 "uuid": "b2a01e70-7e67-47e0-ba7f-7023bd27f47e", 00:17:29.738 "strip_size_kb": 0, 00:17:29.738 "state": "online", 00:17:29.738 "raid_level": "raid1", 00:17:29.738 "superblock": true, 00:17:29.738 "num_base_bdevs": 2, 00:17:29.738 
"num_base_bdevs_discovered": 1, 00:17:29.738 "num_base_bdevs_operational": 1, 00:17:29.738 "base_bdevs_list": [ 00:17:29.738 { 00:17:29.738 "name": null, 00:17:29.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.738 "is_configured": false, 00:17:29.738 "data_offset": 0, 00:17:29.738 "data_size": 7936 00:17:29.738 }, 00:17:29.738 { 00:17:29.738 "name": "BaseBdev2", 00:17:29.738 "uuid": "e3427090-97fc-4422-8f9e-1b39646d96fd", 00:17:29.738 "is_configured": true, 00:17:29.738 "data_offset": 256, 00:17:29.738 "data_size": 7936 00:17:29.738 } 00:17:29.738 ] 00:17:29.738 }' 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.738 17:09:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:30.306 17:09:54 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.306 [2024-11-20 17:09:54.074945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:30.306 [2024-11-20 17:09:54.075064] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.306 [2024-11-20 17:09:54.149079] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.306 [2024-11-20 17:09:54.149133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.306 [2024-11-20 17:09:54.149166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:30.306 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.565 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:30.565 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:17:30.565 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:30.565 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86116 00:17:30.565 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86116 ']' 00:17:30.565 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86116 00:17:30.565 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:30.565 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.565 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86116 00:17:30.565 killing process with pid 86116 00:17:30.565 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.565 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.565 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86116' 00:17:30.565 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86116 00:17:30.565 [2024-11-20 17:09:54.240383] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:30.565 17:09:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86116 00:17:30.565 [2024-11-20 17:09:54.254855] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:31.501 17:09:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:31.501 00:17:31.501 real 0m5.504s 00:17:31.501 user 0m8.386s 00:17:31.501 sys 0m0.797s 00:17:31.501 17:09:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:17:31.501 17:09:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.501 ************************************ 00:17:31.501 END TEST raid_state_function_test_sb_4k 00:17:31.501 ************************************ 00:17:31.501 17:09:55 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:31.501 17:09:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:31.501 17:09:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.501 17:09:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:31.501 ************************************ 00:17:31.501 START TEST raid_superblock_test_4k 00:17:31.501 ************************************ 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86374 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:31.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86374 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86374 ']' 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.501 17:09:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.760 [2024-11-20 17:09:55.387839] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:17:31.760 [2024-11-20 17:09:55.388058] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86374 ] 00:17:31.760 [2024-11-20 17:09:55.563068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.019 [2024-11-20 17:09:55.683728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.279 [2024-11-20 17:09:55.891655] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.279 [2024-11-20 17:09:55.891693] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.537 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.537 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:32.537 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:32.537 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:32.537 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:32.537 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:32.537 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:32.537 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:32.537 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:32.537 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:32.537 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:17:32.537 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.537 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.796 malloc1 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.796 [2024-11-20 17:09:56.428379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:32.796 [2024-11-20 17:09:56.428688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.796 [2024-11-20 17:09:56.428795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:32.796 [2024-11-20 17:09:56.429030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.796 [2024-11-20 17:09:56.432238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.796 [2024-11-20 17:09:56.432467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:32.796 pt1 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.796 malloc2 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.796 [2024-11-20 17:09:56.483646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:32.796 [2024-11-20 17:09:56.483725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.796 [2024-11-20 17:09:56.483763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:32.796 [2024-11-20 17:09:56.483797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.796 [2024-11-20 17:09:56.486619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.796 [2024-11-20 
17:09:56.486658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:32.796 pt2 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.796 [2024-11-20 17:09:56.491650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:32.796 [2024-11-20 17:09:56.494018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:32.796 [2024-11-20 17:09:56.494284] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:32.796 [2024-11-20 17:09:56.494307] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:32.796 [2024-11-20 17:09:56.494606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:32.796 [2024-11-20 17:09:56.494838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:32.796 [2024-11-20 17:09:56.494863] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:32.796 [2024-11-20 17:09:56.495039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
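The trace up to this point drives the RAID-1 superblock setup entirely over SPDK's JSON-RPC interface: two 32 MiB malloc bdevs with a 4096-byte block size are created, each is wrapped in a passthru bdev with a fixed UUID, and the two passthru devices are combined into `raid_bdev1` with an on-disk superblock (`-s`). Condensed into the equivalent standalone RPC calls — a sketch only, assuming a standard SPDK checkout with `scripts/rpc.py` and a running `bdev_svc`/`spdk_tgt` listening on the default `/var/tmp/spdk.sock` — the sequence the test performs is:

```
# Sketch of the RPC sequence seen in the trace above (names and UUIDs as logged).
# Requires a running SPDK application; rpc.py ships in scripts/ of the SPDK repo.
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc1
./scripts/rpc.py bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc2
./scripts/rpc.py bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# -s writes a superblock onto the base bdevs, which is what later lets
# bdev_raid_create fail with "File exists" when re-run against the raw malloc bdevs
./scripts/rpc.py bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
# Inspect the assembled array (state, raid_level, base_bdevs_list)
./scripts/rpc.py bdev_raid_get_bdevs all
```

The superblock is also why `data_offset` is 256 and `data_size` is 7936 in the dumped state below the creation: the first blocks of each 8192-block member are reserved for RAID metadata rather than user data.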
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.796 "name": "raid_bdev1", 00:17:32.796 "uuid": "8f6a2a53-f5e3-4720-be6e-279493bde8ec", 00:17:32.796 "strip_size_kb": 0, 00:17:32.796 "state": "online", 00:17:32.796 "raid_level": "raid1", 00:17:32.796 "superblock": true, 00:17:32.796 "num_base_bdevs": 2, 00:17:32.796 
"num_base_bdevs_discovered": 2, 00:17:32.796 "num_base_bdevs_operational": 2, 00:17:32.796 "base_bdevs_list": [ 00:17:32.796 { 00:17:32.796 "name": "pt1", 00:17:32.796 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:32.796 "is_configured": true, 00:17:32.796 "data_offset": 256, 00:17:32.796 "data_size": 7936 00:17:32.796 }, 00:17:32.796 { 00:17:32.796 "name": "pt2", 00:17:32.796 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.796 "is_configured": true, 00:17:32.796 "data_offset": 256, 00:17:32.796 "data_size": 7936 00:17:32.796 } 00:17:32.796 ] 00:17:32.796 }' 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.796 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.364 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:33.364 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:33.364 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:33.364 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:33.365 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:33.365 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:33.365 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:33.365 17:09:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:33.365 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.365 17:09:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.365 [2024-11-20 17:09:57.004151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:33.365 "name": "raid_bdev1", 00:17:33.365 "aliases": [ 00:17:33.365 "8f6a2a53-f5e3-4720-be6e-279493bde8ec" 00:17:33.365 ], 00:17:33.365 "product_name": "Raid Volume", 00:17:33.365 "block_size": 4096, 00:17:33.365 "num_blocks": 7936, 00:17:33.365 "uuid": "8f6a2a53-f5e3-4720-be6e-279493bde8ec", 00:17:33.365 "assigned_rate_limits": { 00:17:33.365 "rw_ios_per_sec": 0, 00:17:33.365 "rw_mbytes_per_sec": 0, 00:17:33.365 "r_mbytes_per_sec": 0, 00:17:33.365 "w_mbytes_per_sec": 0 00:17:33.365 }, 00:17:33.365 "claimed": false, 00:17:33.365 "zoned": false, 00:17:33.365 "supported_io_types": { 00:17:33.365 "read": true, 00:17:33.365 "write": true, 00:17:33.365 "unmap": false, 00:17:33.365 "flush": false, 00:17:33.365 "reset": true, 00:17:33.365 "nvme_admin": false, 00:17:33.365 "nvme_io": false, 00:17:33.365 "nvme_io_md": false, 00:17:33.365 "write_zeroes": true, 00:17:33.365 "zcopy": false, 00:17:33.365 "get_zone_info": false, 00:17:33.365 "zone_management": false, 00:17:33.365 "zone_append": false, 00:17:33.365 "compare": false, 00:17:33.365 "compare_and_write": false, 00:17:33.365 "abort": false, 00:17:33.365 "seek_hole": false, 00:17:33.365 "seek_data": false, 00:17:33.365 "copy": false, 00:17:33.365 "nvme_iov_md": false 00:17:33.365 }, 00:17:33.365 "memory_domains": [ 00:17:33.365 { 00:17:33.365 "dma_device_id": "system", 00:17:33.365 "dma_device_type": 1 00:17:33.365 }, 00:17:33.365 { 00:17:33.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.365 "dma_device_type": 2 00:17:33.365 }, 00:17:33.365 { 00:17:33.365 "dma_device_id": "system", 00:17:33.365 "dma_device_type": 1 00:17:33.365 }, 00:17:33.365 { 00:17:33.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.365 "dma_device_type": 2 00:17:33.365 } 00:17:33.365 ], 
00:17:33.365 "driver_specific": { 00:17:33.365 "raid": { 00:17:33.365 "uuid": "8f6a2a53-f5e3-4720-be6e-279493bde8ec", 00:17:33.365 "strip_size_kb": 0, 00:17:33.365 "state": "online", 00:17:33.365 "raid_level": "raid1", 00:17:33.365 "superblock": true, 00:17:33.365 "num_base_bdevs": 2, 00:17:33.365 "num_base_bdevs_discovered": 2, 00:17:33.365 "num_base_bdevs_operational": 2, 00:17:33.365 "base_bdevs_list": [ 00:17:33.365 { 00:17:33.365 "name": "pt1", 00:17:33.365 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.365 "is_configured": true, 00:17:33.365 "data_offset": 256, 00:17:33.365 "data_size": 7936 00:17:33.365 }, 00:17:33.365 { 00:17:33.365 "name": "pt2", 00:17:33.365 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.365 "is_configured": true, 00:17:33.365 "data_offset": 256, 00:17:33.365 "data_size": 7936 00:17:33.365 } 00:17:33.365 ] 00:17:33.365 } 00:17:33.365 } 00:17:33.365 }' 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:33.365 pt2' 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.365 17:09:57 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.365 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:33.625 [2024-11-20 17:09:57.276231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8f6a2a53-f5e3-4720-be6e-279493bde8ec 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 8f6a2a53-f5e3-4720-be6e-279493bde8ec ']' 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.625 [2024-11-20 17:09:57.327842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.625 [2024-11-20 17:09:57.327870] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.625 [2024-11-20 17:09:57.327978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.625 [2024-11-20 17:09:57.328054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.625 [2024-11-20 17:09:57.328073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.625 [2024-11-20 17:09:57.471916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:33.625 [2024-11-20 17:09:57.474815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:33.625 [2024-11-20 17:09:57.474917] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:33.625 [2024-11-20 17:09:57.475012] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:33.625 [2024-11-20 17:09:57.475037] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.625 [2024-11-20 17:09:57.475052] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:33.625 request: 00:17:33.625 { 00:17:33.625 "name": "raid_bdev1", 00:17:33.625 "raid_level": "raid1", 00:17:33.625 "base_bdevs": [ 00:17:33.625 "malloc1", 00:17:33.625 "malloc2" 00:17:33.625 ], 00:17:33.625 "superblock": false, 00:17:33.625 "method": "bdev_raid_create", 00:17:33.625 "req_id": 1 00:17:33.625 } 00:17:33.625 Got JSON-RPC error response 00:17:33.625 response: 00:17:33.625 { 00:17:33.625 "code": -17, 00:17:33.625 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:33.625 } 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.625 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.891 [2024-11-20 17:09:57.531915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:33.891 [2024-11-20 17:09:57.531990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.891 [2024-11-20 17:09:57.532020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:33.891 [2024-11-20 17:09:57.532037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.891 [2024-11-20 17:09:57.535038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.891 [2024-11-20 17:09:57.535099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:33.891 [2024-11-20 17:09:57.535223] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:33.891 [2024-11-20 17:09:57.535309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:33.891 pt1 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.891 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.892 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.892 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.892 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.892 "name": "raid_bdev1", 00:17:33.892 "uuid": "8f6a2a53-f5e3-4720-be6e-279493bde8ec", 00:17:33.892 "strip_size_kb": 0, 00:17:33.892 "state": "configuring", 00:17:33.892 "raid_level": "raid1", 00:17:33.892 "superblock": true, 00:17:33.892 "num_base_bdevs": 2, 00:17:33.892 "num_base_bdevs_discovered": 1, 00:17:33.892 "num_base_bdevs_operational": 2, 00:17:33.892 "base_bdevs_list": [ 00:17:33.892 { 00:17:33.892 "name": "pt1", 00:17:33.892 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.892 "is_configured": true, 00:17:33.892 "data_offset": 256, 00:17:33.892 "data_size": 7936 00:17:33.892 }, 00:17:33.892 { 00:17:33.892 "name": null, 00:17:33.892 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.892 "is_configured": false, 00:17:33.892 "data_offset": 256, 00:17:33.892 "data_size": 7936 00:17:33.892 } 
00:17:33.892 ] 00:17:33.892 }' 00:17:33.892 17:09:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.892 17:09:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.465 [2024-11-20 17:09:58.068210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:34.465 [2024-11-20 17:09:58.068494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.465 [2024-11-20 17:09:58.068541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:34.465 [2024-11-20 17:09:58.068560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.465 [2024-11-20 17:09:58.069345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.465 [2024-11-20 17:09:58.069385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:34.465 [2024-11-20 17:09:58.069475] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:34.465 [2024-11-20 17:09:58.069546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:34.465 [2024-11-20 17:09:58.069710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:34.465 [2024-11-20 17:09:58.069731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:34.465 [2024-11-20 17:09:58.070120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:34.465 [2024-11-20 17:09:58.070375] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:34.465 [2024-11-20 17:09:58.070389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:34.465 [2024-11-20 17:09:58.070556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.465 pt2 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.465 "name": "raid_bdev1", 00:17:34.465 "uuid": "8f6a2a53-f5e3-4720-be6e-279493bde8ec", 00:17:34.465 "strip_size_kb": 0, 00:17:34.465 "state": "online", 00:17:34.465 "raid_level": "raid1", 00:17:34.465 "superblock": true, 00:17:34.465 "num_base_bdevs": 2, 00:17:34.465 "num_base_bdevs_discovered": 2, 00:17:34.465 "num_base_bdevs_operational": 2, 00:17:34.465 "base_bdevs_list": [ 00:17:34.465 { 00:17:34.465 "name": "pt1", 00:17:34.465 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.465 "is_configured": true, 00:17:34.465 "data_offset": 256, 00:17:34.465 "data_size": 7936 00:17:34.465 }, 00:17:34.465 { 00:17:34.465 "name": "pt2", 00:17:34.465 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.465 "is_configured": true, 00:17:34.465 "data_offset": 256, 00:17:34.465 "data_size": 7936 00:17:34.465 } 00:17:34.465 ] 00:17:34.465 }' 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.465 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.725 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:34.725 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:34.984 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:34.984 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:34.984 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:34.984 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:34.984 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:34.984 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:34.984 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.984 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.984 [2024-11-20 17:09:58.608717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.984 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.984 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:34.984 "name": "raid_bdev1", 00:17:34.984 "aliases": [ 00:17:34.984 "8f6a2a53-f5e3-4720-be6e-279493bde8ec" 00:17:34.984 ], 00:17:34.984 "product_name": "Raid Volume", 00:17:34.984 "block_size": 4096, 00:17:34.984 "num_blocks": 7936, 00:17:34.984 "uuid": "8f6a2a53-f5e3-4720-be6e-279493bde8ec", 00:17:34.984 "assigned_rate_limits": { 00:17:34.984 "rw_ios_per_sec": 0, 00:17:34.984 "rw_mbytes_per_sec": 0, 00:17:34.984 "r_mbytes_per_sec": 0, 00:17:34.984 "w_mbytes_per_sec": 0 00:17:34.984 }, 00:17:34.984 "claimed": false, 00:17:34.984 "zoned": false, 00:17:34.984 "supported_io_types": { 00:17:34.984 "read": true, 00:17:34.984 "write": true, 00:17:34.984 "unmap": false, 
00:17:34.984 "flush": false, 00:17:34.984 "reset": true, 00:17:34.984 "nvme_admin": false, 00:17:34.984 "nvme_io": false, 00:17:34.984 "nvme_io_md": false, 00:17:34.984 "write_zeroes": true, 00:17:34.984 "zcopy": false, 00:17:34.984 "get_zone_info": false, 00:17:34.984 "zone_management": false, 00:17:34.984 "zone_append": false, 00:17:34.984 "compare": false, 00:17:34.984 "compare_and_write": false, 00:17:34.984 "abort": false, 00:17:34.984 "seek_hole": false, 00:17:34.984 "seek_data": false, 00:17:34.984 "copy": false, 00:17:34.984 "nvme_iov_md": false 00:17:34.984 }, 00:17:34.984 "memory_domains": [ 00:17:34.984 { 00:17:34.984 "dma_device_id": "system", 00:17:34.984 "dma_device_type": 1 00:17:34.984 }, 00:17:34.984 { 00:17:34.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.984 "dma_device_type": 2 00:17:34.984 }, 00:17:34.984 { 00:17:34.984 "dma_device_id": "system", 00:17:34.984 "dma_device_type": 1 00:17:34.984 }, 00:17:34.984 { 00:17:34.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.984 "dma_device_type": 2 00:17:34.984 } 00:17:34.984 ], 00:17:34.984 "driver_specific": { 00:17:34.984 "raid": { 00:17:34.984 "uuid": "8f6a2a53-f5e3-4720-be6e-279493bde8ec", 00:17:34.984 "strip_size_kb": 0, 00:17:34.984 "state": "online", 00:17:34.984 "raid_level": "raid1", 00:17:34.984 "superblock": true, 00:17:34.984 "num_base_bdevs": 2, 00:17:34.984 "num_base_bdevs_discovered": 2, 00:17:34.984 "num_base_bdevs_operational": 2, 00:17:34.984 "base_bdevs_list": [ 00:17:34.984 { 00:17:34.984 "name": "pt1", 00:17:34.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.984 "is_configured": true, 00:17:34.984 "data_offset": 256, 00:17:34.984 "data_size": 7936 00:17:34.984 }, 00:17:34.984 { 00:17:34.984 "name": "pt2", 00:17:34.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.984 "is_configured": true, 00:17:34.984 "data_offset": 256, 00:17:34.984 "data_size": 7936 00:17:34.984 } 00:17:34.984 ] 00:17:34.984 } 00:17:34.984 } 00:17:34.984 }' 00:17:34.984 
17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:34.985 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:34.985 pt2' 00:17:34.985 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.985 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:34.985 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.985 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.985 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:34.985 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.985 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.985 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.985 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:34.985 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:34.985 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.985 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:34.985 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.985 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.985 17:09:58 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.985 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.244 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:35.244 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.245 [2024-11-20 17:09:58.876701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 8f6a2a53-f5e3-4720-be6e-279493bde8ec '!=' 8f6a2a53-f5e3-4720-be6e-279493bde8ec ']' 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.245 [2024-11-20 17:09:58.924493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.245 "name": "raid_bdev1", 00:17:35.245 "uuid": 
"8f6a2a53-f5e3-4720-be6e-279493bde8ec", 00:17:35.245 "strip_size_kb": 0, 00:17:35.245 "state": "online", 00:17:35.245 "raid_level": "raid1", 00:17:35.245 "superblock": true, 00:17:35.245 "num_base_bdevs": 2, 00:17:35.245 "num_base_bdevs_discovered": 1, 00:17:35.245 "num_base_bdevs_operational": 1, 00:17:35.245 "base_bdevs_list": [ 00:17:35.245 { 00:17:35.245 "name": null, 00:17:35.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.245 "is_configured": false, 00:17:35.245 "data_offset": 0, 00:17:35.245 "data_size": 7936 00:17:35.245 }, 00:17:35.245 { 00:17:35.245 "name": "pt2", 00:17:35.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.245 "is_configured": true, 00:17:35.245 "data_offset": 256, 00:17:35.245 "data_size": 7936 00:17:35.245 } 00:17:35.245 ] 00:17:35.245 }' 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.245 17:09:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.813 [2024-11-20 17:09:59.456676] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.813 [2024-11-20 17:09:59.456896] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:35.813 [2024-11-20 17:09:59.457023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:35.813 [2024-11-20 17:09:59.457085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:35.813 [2024-11-20 17:09:59.457106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.813 [2024-11-20 17:09:59.540677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:35.813 [2024-11-20 17:09:59.540753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.813 [2024-11-20 17:09:59.540825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:35.813 [2024-11-20 17:09:59.540850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.813 [2024-11-20 17:09:59.544041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.813 [2024-11-20 17:09:59.544148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:35.813 [2024-11-20 17:09:59.544243] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:35.813 [2024-11-20 17:09:59.544303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.813 [2024-11-20 17:09:59.544429] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:35.813 [2024-11-20 17:09:59.544450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:35.813 [2024-11-20 17:09:59.544755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:35.813 [2024-11-20 17:09:59.545021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:35.813 [2024-11-20 17:09:59.545044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:17:35.813 [2024-11-20 17:09:59.545281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.813 pt2 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.813 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:35.814 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.814 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.814 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.814 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.814 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:35.814 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.814 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.814 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.814 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.814 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.814 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.814 17:09:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.814 17:09:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.814 17:09:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.814 17:09:59 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.814 "name": "raid_bdev1", 00:17:35.814 "uuid": "8f6a2a53-f5e3-4720-be6e-279493bde8ec", 00:17:35.814 "strip_size_kb": 0, 00:17:35.814 "state": "online", 00:17:35.814 "raid_level": "raid1", 00:17:35.814 "superblock": true, 00:17:35.814 "num_base_bdevs": 2, 00:17:35.814 "num_base_bdevs_discovered": 1, 00:17:35.814 "num_base_bdevs_operational": 1, 00:17:35.814 "base_bdevs_list": [ 00:17:35.814 { 00:17:35.814 "name": null, 00:17:35.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.814 "is_configured": false, 00:17:35.814 "data_offset": 256, 00:17:35.814 "data_size": 7936 00:17:35.814 }, 00:17:35.814 { 00:17:35.814 "name": "pt2", 00:17:35.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.814 "is_configured": true, 00:17:35.814 "data_offset": 256, 00:17:35.814 "data_size": 7936 00:17:35.814 } 00:17:35.814 ] 00:17:35.814 }' 00:17:35.814 17:09:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.814 17:09:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.381 [2024-11-20 17:10:00.070361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:36.381 [2024-11-20 17:10:00.070419] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:36.381 [2024-11-20 17:10:00.070493] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.381 [2024-11-20 17:10:00.070550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:36.381 [2024-11-20 17:10:00.070564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.381 [2024-11-20 17:10:00.138414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:36.381 [2024-11-20 17:10:00.138497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.381 [2024-11-20 17:10:00.138524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:36.381 [2024-11-20 17:10:00.138541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.381 [2024-11-20 17:10:00.141720] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.381 [2024-11-20 17:10:00.141784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:36.381 [2024-11-20 17:10:00.141917] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:36.381 [2024-11-20 17:10:00.141974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:36.381 [2024-11-20 17:10:00.142180] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:36.381 [2024-11-20 17:10:00.142196] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:36.381 [2024-11-20 17:10:00.142215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:36.381 [2024-11-20 17:10:00.142273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:36.381 [2024-11-20 17:10:00.142424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:36.381 [2024-11-20 17:10:00.142454] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:36.381 [2024-11-20 17:10:00.142750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:36.381 [2024-11-20 17:10:00.142995] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:36.381 [2024-11-20 17:10:00.143034] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:36.381 [2024-11-20 17:10:00.143345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.381 pt1 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.381 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.382 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.382 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.382 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.382 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.382 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.382 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.382 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.382 "name": "raid_bdev1", 00:17:36.382 "uuid": "8f6a2a53-f5e3-4720-be6e-279493bde8ec", 00:17:36.382 "strip_size_kb": 0, 00:17:36.382 "state": "online", 00:17:36.382 
"raid_level": "raid1", 00:17:36.382 "superblock": true, 00:17:36.382 "num_base_bdevs": 2, 00:17:36.382 "num_base_bdevs_discovered": 1, 00:17:36.382 "num_base_bdevs_operational": 1, 00:17:36.382 "base_bdevs_list": [ 00:17:36.382 { 00:17:36.382 "name": null, 00:17:36.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.382 "is_configured": false, 00:17:36.382 "data_offset": 256, 00:17:36.382 "data_size": 7936 00:17:36.382 }, 00:17:36.382 { 00:17:36.382 "name": "pt2", 00:17:36.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.382 "is_configured": true, 00:17:36.382 "data_offset": 256, 00:17:36.382 "data_size": 7936 00:17:36.382 } 00:17:36.382 ] 00:17:36.382 }' 00:17:36.382 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.382 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.948 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:36.948 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:36.948 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.949 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.949 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.949 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:36.949 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:36.949 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:36.949 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.949 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:17:36.949 [2024-11-20 17:10:00.747066] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.949 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.949 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 8f6a2a53-f5e3-4720-be6e-279493bde8ec '!=' 8f6a2a53-f5e3-4720-be6e-279493bde8ec ']' 00:17:36.949 17:10:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86374 00:17:36.949 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86374 ']' 00:17:36.949 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86374 00:17:36.949 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:36.949 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.949 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86374 00:17:37.207 killing process with pid 86374 00:17:37.207 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:37.207 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:37.207 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86374' 00:17:37.207 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86374 00:17:37.207 [2024-11-20 17:10:00.825204] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:37.207 [2024-11-20 17:10:00.825323] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.207 17:10:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86374 00:17:37.207 [2024-11-20 17:10:00.825391] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.207 [2024-11-20 17:10:00.825420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:37.207 [2024-11-20 17:10:00.993676] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:38.585 17:10:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:38.585 00:17:38.585 real 0m6.739s 00:17:38.585 user 0m10.675s 00:17:38.585 sys 0m1.025s 00:17:38.585 17:10:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:38.585 ************************************ 00:17:38.585 END TEST raid_superblock_test_4k 00:17:38.585 ************************************ 00:17:38.585 17:10:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.585 17:10:02 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:17:38.585 17:10:02 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:38.585 17:10:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:38.585 17:10:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:38.585 17:10:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:38.585 ************************************ 00:17:38.585 START TEST raid_rebuild_test_sb_4k 00:17:38.585 ************************************ 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:38.585 
17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:38.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86702 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86702 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86702 ']' 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.585 17:10:02 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.585 17:10:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.585 [2024-11-20 17:10:02.214310] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:17:38.585 [2024-11-20 17:10:02.214816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86702 ] 00:17:38.585 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:38.585 Zero copy mechanism will not be used. 00:17:38.585 [2024-11-20 17:10:02.404644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.845 [2024-11-20 17:10:02.555352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.104 [2024-11-20 17:10:02.763157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.104 [2024-11-20 17:10:02.763452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.364 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.364 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:39.364 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:39.364 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:39.364 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.364 17:10:03 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.364 BaseBdev1_malloc 00:17:39.364 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.364 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:39.364 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.364 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.364 [2024-11-20 17:10:03.220312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:39.364 [2024-11-20 17:10:03.220571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.364 [2024-11-20 17:10:03.220618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:39.364 [2024-11-20 17:10:03.220638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.364 [2024-11-20 17:10:03.223732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.364 [2024-11-20 17:10:03.223929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:39.364 BaseBdev1 00:17:39.364 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.364 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:39.364 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:39.364 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.364 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.624 BaseBdev2_malloc 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.624 [2024-11-20 17:10:03.276868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:39.624 [2024-11-20 17:10:03.277162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.624 [2024-11-20 17:10:03.277203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:39.624 [2024-11-20 17:10:03.277222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.624 [2024-11-20 17:10:03.280198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.624 [2024-11-20 17:10:03.280402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:39.624 BaseBdev2 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.624 spare_malloc 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.624 spare_delay 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.624 [2024-11-20 17:10:03.347985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:39.624 [2024-11-20 17:10:03.348243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.624 [2024-11-20 17:10:03.348280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:39.624 [2024-11-20 17:10:03.348301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.624 [2024-11-20 17:10:03.351317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.624 [2024-11-20 17:10:03.351524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:39.624 spare 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.624 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.624 [2024-11-20 17:10:03.360334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:17:39.625 [2024-11-20 17:10:03.363181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.625 [2024-11-20 17:10:03.363563] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:39.625 [2024-11-20 17:10:03.363715] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:39.625 [2024-11-20 17:10:03.364151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:39.625 [2024-11-20 17:10:03.364558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:39.625 [2024-11-20 17:10:03.364679] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:39.625 [2024-11-20 17:10:03.365054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.625 "name": "raid_bdev1", 00:17:39.625 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:39.625 "strip_size_kb": 0, 00:17:39.625 "state": "online", 00:17:39.625 "raid_level": "raid1", 00:17:39.625 "superblock": true, 00:17:39.625 "num_base_bdevs": 2, 00:17:39.625 "num_base_bdevs_discovered": 2, 00:17:39.625 "num_base_bdevs_operational": 2, 00:17:39.625 "base_bdevs_list": [ 00:17:39.625 { 00:17:39.625 "name": "BaseBdev1", 00:17:39.625 "uuid": "bdf2d918-f1fb-5fc3-9df7-d8106f377798", 00:17:39.625 "is_configured": true, 00:17:39.625 "data_offset": 256, 00:17:39.625 "data_size": 7936 00:17:39.625 }, 00:17:39.625 { 00:17:39.625 "name": "BaseBdev2", 00:17:39.625 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:39.625 "is_configured": true, 00:17:39.625 "data_offset": 256, 00:17:39.625 "data_size": 7936 00:17:39.625 } 00:17:39.625 ] 00:17:39.625 }' 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.625 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r 
'.[].num_blocks' 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.194 [2024-11-20 17:10:03.853512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:40.194 17:10:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:40.453 [2024-11-20 17:10:04.185350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:40.453 /dev/nbd0 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:40.453 17:10:04 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:40.453 1+0 records in 00:17:40.453 1+0 records out 00:17:40.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031066 s, 13.2 MB/s 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:40.453 17:10:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:41.389 7936+0 records in 00:17:41.389 7936+0 records out 00:17:41.389 32505856 bytes (33 MB, 31 MiB) copied, 0.861819 s, 37.7 MB/s 00:17:41.389 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:41.389 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:41.389 17:10:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:41.389 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:41.389 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:41.389 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:41.389 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:41.649 [2024-11-20 17:10:05.398674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.649 [2024-11-20 17:10:05.416639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:41.649 17:10:05 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.649 "name": "raid_bdev1", 00:17:41.649 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:41.649 
"strip_size_kb": 0, 00:17:41.649 "state": "online", 00:17:41.649 "raid_level": "raid1", 00:17:41.649 "superblock": true, 00:17:41.649 "num_base_bdevs": 2, 00:17:41.649 "num_base_bdevs_discovered": 1, 00:17:41.649 "num_base_bdevs_operational": 1, 00:17:41.649 "base_bdevs_list": [ 00:17:41.649 { 00:17:41.649 "name": null, 00:17:41.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.649 "is_configured": false, 00:17:41.649 "data_offset": 0, 00:17:41.649 "data_size": 7936 00:17:41.649 }, 00:17:41.649 { 00:17:41.649 "name": "BaseBdev2", 00:17:41.649 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:41.649 "is_configured": true, 00:17:41.649 "data_offset": 256, 00:17:41.649 "data_size": 7936 00:17:41.649 } 00:17:41.649 ] 00:17:41.649 }' 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.649 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.217 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:42.217 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.217 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.217 [2024-11-20 17:10:05.884938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.217 [2024-11-20 17:10:05.901929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:42.217 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.217 17:10:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:42.217 [2024-11-20 17:10:05.904624] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:43.154 17:10:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:43.154 17:10:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.154 17:10:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.154 17:10:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.154 17:10:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.154 17:10:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.154 17:10:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.154 17:10:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.154 17:10:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.154 17:10:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.154 17:10:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.154 "name": "raid_bdev1", 00:17:43.154 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:43.154 "strip_size_kb": 0, 00:17:43.154 "state": "online", 00:17:43.154 "raid_level": "raid1", 00:17:43.154 "superblock": true, 00:17:43.154 "num_base_bdevs": 2, 00:17:43.154 "num_base_bdevs_discovered": 2, 00:17:43.154 "num_base_bdevs_operational": 2, 00:17:43.154 "process": { 00:17:43.154 "type": "rebuild", 00:17:43.154 "target": "spare", 00:17:43.154 "progress": { 00:17:43.154 "blocks": 2560, 00:17:43.154 "percent": 32 00:17:43.154 } 00:17:43.155 }, 00:17:43.155 "base_bdevs_list": [ 00:17:43.155 { 00:17:43.155 "name": "spare", 00:17:43.155 "uuid": "91da3cb7-9421-55e6-8b63-3e7a4f57c79b", 00:17:43.155 "is_configured": true, 00:17:43.155 "data_offset": 256, 00:17:43.155 "data_size": 7936 00:17:43.155 }, 00:17:43.155 { 00:17:43.155 "name": "BaseBdev2", 
00:17:43.155 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:43.155 "is_configured": true, 00:17:43.155 "data_offset": 256, 00:17:43.155 "data_size": 7936 00:17:43.155 } 00:17:43.155 ] 00:17:43.155 }' 00:17:43.155 17:10:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.155 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.155 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.414 [2024-11-20 17:10:07.077954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:43.414 [2024-11-20 17:10:07.113088] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:43.414 [2024-11-20 17:10:07.113176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.414 [2024-11-20 17:10:07.113196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:43.414 [2024-11-20 17:10:07.113209] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.414 "name": "raid_bdev1", 00:17:43.414 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:43.414 "strip_size_kb": 0, 00:17:43.414 "state": "online", 00:17:43.414 "raid_level": "raid1", 00:17:43.414 "superblock": true, 00:17:43.414 "num_base_bdevs": 2, 00:17:43.414 "num_base_bdevs_discovered": 1, 00:17:43.414 "num_base_bdevs_operational": 1, 00:17:43.414 "base_bdevs_list": [ 00:17:43.414 { 00:17:43.414 "name": 
null, 00:17:43.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.414 "is_configured": false, 00:17:43.414 "data_offset": 0, 00:17:43.414 "data_size": 7936 00:17:43.414 }, 00:17:43.414 { 00:17:43.414 "name": "BaseBdev2", 00:17:43.414 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:43.414 "is_configured": true, 00:17:43.414 "data_offset": 256, 00:17:43.414 "data_size": 7936 00:17:43.414 } 00:17:43.414 ] 00:17:43.414 }' 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.414 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.984 "name": "raid_bdev1", 00:17:43.984 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:43.984 
"strip_size_kb": 0, 00:17:43.984 "state": "online", 00:17:43.984 "raid_level": "raid1", 00:17:43.984 "superblock": true, 00:17:43.984 "num_base_bdevs": 2, 00:17:43.984 "num_base_bdevs_discovered": 1, 00:17:43.984 "num_base_bdevs_operational": 1, 00:17:43.984 "base_bdevs_list": [ 00:17:43.984 { 00:17:43.984 "name": null, 00:17:43.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.984 "is_configured": false, 00:17:43.984 "data_offset": 0, 00:17:43.984 "data_size": 7936 00:17:43.984 }, 00:17:43.984 { 00:17:43.984 "name": "BaseBdev2", 00:17:43.984 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:43.984 "is_configured": true, 00:17:43.984 "data_offset": 256, 00:17:43.984 "data_size": 7936 00:17:43.984 } 00:17:43.984 ] 00:17:43.984 }' 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.984 [2024-11-20 17:10:07.814097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:43.984 [2024-11-20 17:10:07.829978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.984 17:10:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:17:43.984 [2024-11-20 17:10:07.832718] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.363 "name": "raid_bdev1", 00:17:45.363 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:45.363 "strip_size_kb": 0, 00:17:45.363 "state": "online", 00:17:45.363 "raid_level": "raid1", 00:17:45.363 "superblock": true, 00:17:45.363 "num_base_bdevs": 2, 00:17:45.363 "num_base_bdevs_discovered": 2, 00:17:45.363 "num_base_bdevs_operational": 2, 00:17:45.363 "process": { 00:17:45.363 "type": "rebuild", 00:17:45.363 "target": "spare", 00:17:45.363 "progress": { 00:17:45.363 "blocks": 2560, 00:17:45.363 "percent": 32 00:17:45.363 } 00:17:45.363 }, 00:17:45.363 "base_bdevs_list": [ 00:17:45.363 { 
00:17:45.363 "name": "spare", 00:17:45.363 "uuid": "91da3cb7-9421-55e6-8b63-3e7a4f57c79b", 00:17:45.363 "is_configured": true, 00:17:45.363 "data_offset": 256, 00:17:45.363 "data_size": 7936 00:17:45.363 }, 00:17:45.363 { 00:17:45.363 "name": "BaseBdev2", 00:17:45.363 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:45.363 "is_configured": true, 00:17:45.363 "data_offset": 256, 00:17:45.363 "data_size": 7936 00:17:45.363 } 00:17:45.363 ] 00:17:45.363 }' 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:45.363 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=724 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.363 17:10:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.363 17:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.363 17:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.363 17:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.363 17:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.363 17:10:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.364 17:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.364 "name": "raid_bdev1", 00:17:45.364 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:45.364 "strip_size_kb": 0, 00:17:45.364 "state": "online", 00:17:45.364 "raid_level": "raid1", 00:17:45.364 "superblock": true, 00:17:45.364 "num_base_bdevs": 2, 00:17:45.364 "num_base_bdevs_discovered": 2, 00:17:45.364 "num_base_bdevs_operational": 2, 00:17:45.364 "process": { 00:17:45.364 "type": "rebuild", 00:17:45.364 "target": "spare", 00:17:45.364 "progress": { 00:17:45.364 "blocks": 2816, 00:17:45.364 "percent": 35 00:17:45.364 } 00:17:45.364 }, 00:17:45.364 "base_bdevs_list": [ 00:17:45.364 { 00:17:45.364 "name": "spare", 00:17:45.364 "uuid": "91da3cb7-9421-55e6-8b63-3e7a4f57c79b", 00:17:45.364 "is_configured": true, 00:17:45.364 "data_offset": 256, 00:17:45.364 "data_size": 7936 00:17:45.364 }, 00:17:45.364 { 00:17:45.364 "name": "BaseBdev2", 00:17:45.364 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:45.364 
"is_configured": true, 00:17:45.364 "data_offset": 256, 00:17:45.364 "data_size": 7936 00:17:45.364 } 00:17:45.364 ] 00:17:45.364 }' 00:17:45.364 17:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.364 17:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.364 17:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.364 17:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.364 17:10:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:46.301 17:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:46.301 17:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.301 17:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.301 17:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.301 17:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.301 17:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.562 17:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.562 17:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.562 17:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.562 17:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.562 17:10:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.562 17:10:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.562 "name": "raid_bdev1", 00:17:46.562 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:46.562 "strip_size_kb": 0, 00:17:46.562 "state": "online", 00:17:46.562 "raid_level": "raid1", 00:17:46.562 "superblock": true, 00:17:46.562 "num_base_bdevs": 2, 00:17:46.562 "num_base_bdevs_discovered": 2, 00:17:46.562 "num_base_bdevs_operational": 2, 00:17:46.562 "process": { 00:17:46.562 "type": "rebuild", 00:17:46.562 "target": "spare", 00:17:46.562 "progress": { 00:17:46.562 "blocks": 5888, 00:17:46.562 "percent": 74 00:17:46.562 } 00:17:46.562 }, 00:17:46.562 "base_bdevs_list": [ 00:17:46.562 { 00:17:46.562 "name": "spare", 00:17:46.562 "uuid": "91da3cb7-9421-55e6-8b63-3e7a4f57c79b", 00:17:46.562 "is_configured": true, 00:17:46.562 "data_offset": 256, 00:17:46.562 "data_size": 7936 00:17:46.562 }, 00:17:46.562 { 00:17:46.562 "name": "BaseBdev2", 00:17:46.562 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:46.562 "is_configured": true, 00:17:46.562 "data_offset": 256, 00:17:46.562 "data_size": 7936 00:17:46.562 } 00:17:46.562 ] 00:17:46.562 }' 00:17:46.562 17:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.562 17:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.562 17:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.562 17:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.562 17:10:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:47.131 [2024-11-20 17:10:10.954588] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:47.131 [2024-11-20 17:10:10.954671] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:47.131 
[2024-11-20 17:10:10.954882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.699 "name": "raid_bdev1", 00:17:47.699 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:47.699 "strip_size_kb": 0, 00:17:47.699 "state": "online", 00:17:47.699 "raid_level": "raid1", 00:17:47.699 "superblock": true, 00:17:47.699 "num_base_bdevs": 2, 00:17:47.699 "num_base_bdevs_discovered": 2, 00:17:47.699 "num_base_bdevs_operational": 2, 00:17:47.699 "base_bdevs_list": [ 00:17:47.699 { 00:17:47.699 "name": "spare", 00:17:47.699 "uuid": "91da3cb7-9421-55e6-8b63-3e7a4f57c79b", 00:17:47.699 "is_configured": true, 00:17:47.699 
"data_offset": 256, 00:17:47.699 "data_size": 7936 00:17:47.699 }, 00:17:47.699 { 00:17:47.699 "name": "BaseBdev2", 00:17:47.699 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:47.699 "is_configured": true, 00:17:47.699 "data_offset": 256, 00:17:47.699 "data_size": 7936 00:17:47.699 } 00:17:47.699 ] 00:17:47.699 }' 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.699 "name": "raid_bdev1", 00:17:47.699 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:47.699 "strip_size_kb": 0, 00:17:47.699 "state": "online", 00:17:47.699 "raid_level": "raid1", 00:17:47.699 "superblock": true, 00:17:47.699 "num_base_bdevs": 2, 00:17:47.699 "num_base_bdevs_discovered": 2, 00:17:47.699 "num_base_bdevs_operational": 2, 00:17:47.699 "base_bdevs_list": [ 00:17:47.699 { 00:17:47.699 "name": "spare", 00:17:47.699 "uuid": "91da3cb7-9421-55e6-8b63-3e7a4f57c79b", 00:17:47.699 "is_configured": true, 00:17:47.699 "data_offset": 256, 00:17:47.699 "data_size": 7936 00:17:47.699 }, 00:17:47.699 { 00:17:47.699 "name": "BaseBdev2", 00:17:47.699 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:47.699 "is_configured": true, 00:17:47.699 "data_offset": 256, 00:17:47.699 "data_size": 7936 00:17:47.699 } 00:17:47.699 ] 00:17:47.699 }' 00:17:47.699 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.958 "name": "raid_bdev1", 00:17:47.958 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:47.958 "strip_size_kb": 0, 00:17:47.958 "state": "online", 00:17:47.958 "raid_level": "raid1", 00:17:47.958 "superblock": true, 00:17:47.958 "num_base_bdevs": 2, 00:17:47.958 "num_base_bdevs_discovered": 2, 00:17:47.958 "num_base_bdevs_operational": 2, 00:17:47.958 "base_bdevs_list": [ 00:17:47.958 { 00:17:47.958 "name": "spare", 00:17:47.958 "uuid": "91da3cb7-9421-55e6-8b63-3e7a4f57c79b", 00:17:47.958 "is_configured": true, 00:17:47.958 "data_offset": 256, 00:17:47.958 "data_size": 7936 00:17:47.958 }, 00:17:47.958 { 00:17:47.958 "name": "BaseBdev2", 00:17:47.958 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:47.958 
"is_configured": true, 00:17:47.958 "data_offset": 256, 00:17:47.958 "data_size": 7936 00:17:47.958 } 00:17:47.958 ] 00:17:47.958 }' 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.958 17:10:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.527 [2024-11-20 17:10:12.162175] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.527 [2024-11-20 17:10:12.162216] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.527 [2024-11-20 17:10:12.162343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.527 [2024-11-20 17:10:12.162481] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.527 [2024-11-20 17:10:12.162501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.527 17:10:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:48.527 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:48.786 /dev/nbd0 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:48.786 1+0 records in 00:17:48.786 1+0 records out 00:17:48.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645513 s, 6.3 MB/s 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:48.786 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:49.045 /dev/nbd1 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:49.305 1+0 records in 00:17:49.305 1+0 records out 00:17:49.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362148 s, 11.3 MB/s 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:49.305 17:10:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:49.305 17:10:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:49.305 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:49.305 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:49.305 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:49.305 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:49.305 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:49.305 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:49.305 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:49.564 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:49.565 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:49.565 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:49.565 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:49.565 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:49.565 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:49.565 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:49.565 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:49.565 
17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:49.565 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:49.824 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:49.824 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:49.824 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:49.824 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:49.824 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:49.824 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:49.824 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:49.824 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:49.824 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:49.824 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:49.824 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.824 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.824 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.825 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:49.825 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.825 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:17:49.825 [2024-11-20 17:10:13.683658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:49.825 [2024-11-20 17:10:13.683717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.825 [2024-11-20 17:10:13.683778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:49.825 [2024-11-20 17:10:13.683811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.825 [2024-11-20 17:10:13.687037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.825 [2024-11-20 17:10:13.687097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:49.825 [2024-11-20 17:10:13.687278] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:49.825 [2024-11-20 17:10:13.687369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.825 [2024-11-20 17:10:13.687642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:49.825 spare 00:17:49.825 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.825 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:49.825 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.825 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.085 [2024-11-20 17:10:13.787851] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:50.085 [2024-11-20 17:10:13.787890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:50.085 [2024-11-20 17:10:13.788309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:50.085 [2024-11-20 17:10:13.788543] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:50.085 [2024-11-20 17:10:13.788559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:50.085 [2024-11-20 17:10:13.788788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.085 "name": "raid_bdev1", 00:17:50.085 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:50.085 "strip_size_kb": 0, 00:17:50.085 "state": "online", 00:17:50.085 "raid_level": "raid1", 00:17:50.085 "superblock": true, 00:17:50.085 "num_base_bdevs": 2, 00:17:50.085 "num_base_bdevs_discovered": 2, 00:17:50.085 "num_base_bdevs_operational": 2, 00:17:50.085 "base_bdevs_list": [ 00:17:50.085 { 00:17:50.085 "name": "spare", 00:17:50.085 "uuid": "91da3cb7-9421-55e6-8b63-3e7a4f57c79b", 00:17:50.085 "is_configured": true, 00:17:50.085 "data_offset": 256, 00:17:50.085 "data_size": 7936 00:17:50.085 }, 00:17:50.085 { 00:17:50.085 "name": "BaseBdev2", 00:17:50.085 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:50.085 "is_configured": true, 00:17:50.085 "data_offset": 256, 00:17:50.085 "data_size": 7936 00:17:50.085 } 00:17:50.085 ] 00:17:50.085 }' 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.085 17:10:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.653 "name": "raid_bdev1", 00:17:50.653 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:50.653 "strip_size_kb": 0, 00:17:50.653 "state": "online", 00:17:50.653 "raid_level": "raid1", 00:17:50.653 "superblock": true, 00:17:50.653 "num_base_bdevs": 2, 00:17:50.653 "num_base_bdevs_discovered": 2, 00:17:50.653 "num_base_bdevs_operational": 2, 00:17:50.653 "base_bdevs_list": [ 00:17:50.653 { 00:17:50.653 "name": "spare", 00:17:50.653 "uuid": "91da3cb7-9421-55e6-8b63-3e7a4f57c79b", 00:17:50.653 "is_configured": true, 00:17:50.653 "data_offset": 256, 00:17:50.653 "data_size": 7936 00:17:50.653 }, 00:17:50.653 { 00:17:50.653 "name": "BaseBdev2", 00:17:50.653 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:50.653 "is_configured": true, 00:17:50.653 "data_offset": 256, 00:17:50.653 "data_size": 7936 00:17:50.653 } 00:17:50.653 ] 00:17:50.653 }' 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.653 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.913 [2024-11-20 17:10:14.532017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.913 17:10:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.913 "name": "raid_bdev1", 00:17:50.913 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:50.913 "strip_size_kb": 0, 00:17:50.913 "state": "online", 00:17:50.913 "raid_level": "raid1", 00:17:50.913 "superblock": true, 00:17:50.913 "num_base_bdevs": 2, 00:17:50.913 "num_base_bdevs_discovered": 1, 00:17:50.913 "num_base_bdevs_operational": 1, 00:17:50.913 "base_bdevs_list": [ 00:17:50.913 { 00:17:50.913 "name": null, 00:17:50.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.913 "is_configured": false, 00:17:50.913 "data_offset": 0, 00:17:50.913 "data_size": 7936 00:17:50.913 }, 00:17:50.913 { 00:17:50.913 "name": "BaseBdev2", 00:17:50.913 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:50.913 "is_configured": true, 00:17:50.913 "data_offset": 256, 00:17:50.913 "data_size": 7936 00:17:50.913 } 00:17:50.913 ] 00:17:50.913 }' 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.913 17:10:14 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:51.482 17:10:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:51.482 17:10:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.482 17:10:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.482 [2024-11-20 17:10:15.068279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.482 [2024-11-20 17:10:15.068734] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:51.482 [2024-11-20 17:10:15.068781] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:51.482 [2024-11-20 17:10:15.068843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.482 [2024-11-20 17:10:15.085237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:51.482 17:10:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.482 17:10:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:51.482 [2024-11-20 17:10:15.087865] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:52.421 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.421 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.421 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.421 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.421 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:17:52.421 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.421 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.421 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.421 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.421 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.421 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.421 "name": "raid_bdev1", 00:17:52.421 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:52.421 "strip_size_kb": 0, 00:17:52.421 "state": "online", 00:17:52.421 "raid_level": "raid1", 00:17:52.421 "superblock": true, 00:17:52.421 "num_base_bdevs": 2, 00:17:52.421 "num_base_bdevs_discovered": 2, 00:17:52.421 "num_base_bdevs_operational": 2, 00:17:52.421 "process": { 00:17:52.421 "type": "rebuild", 00:17:52.421 "target": "spare", 00:17:52.421 "progress": { 00:17:52.421 "blocks": 2560, 00:17:52.421 "percent": 32 00:17:52.421 } 00:17:52.421 }, 00:17:52.421 "base_bdevs_list": [ 00:17:52.421 { 00:17:52.421 "name": "spare", 00:17:52.421 "uuid": "91da3cb7-9421-55e6-8b63-3e7a4f57c79b", 00:17:52.421 "is_configured": true, 00:17:52.421 "data_offset": 256, 00:17:52.421 "data_size": 7936 00:17:52.421 }, 00:17:52.421 { 00:17:52.421 "name": "BaseBdev2", 00:17:52.421 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:52.421 "is_configured": true, 00:17:52.421 "data_offset": 256, 00:17:52.421 "data_size": 7936 00:17:52.421 } 00:17:52.421 ] 00:17:52.421 }' 00:17:52.421 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.421 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.421 17:10:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.421 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.421 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:52.421 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.421 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.421 [2024-11-20 17:10:16.265166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.681 [2024-11-20 17:10:16.296201] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:52.681 [2024-11-20 17:10:16.296492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.681 [2024-11-20 17:10:16.296518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.681 [2024-11-20 17:10:16.296533] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:52.681 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.681 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.681 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.681 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.681 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.681 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.681 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.681 
17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.681 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.681 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.681 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.681 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.681 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.681 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.681 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.681 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.681 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.681 "name": "raid_bdev1", 00:17:52.681 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:52.681 "strip_size_kb": 0, 00:17:52.681 "state": "online", 00:17:52.681 "raid_level": "raid1", 00:17:52.681 "superblock": true, 00:17:52.681 "num_base_bdevs": 2, 00:17:52.681 "num_base_bdevs_discovered": 1, 00:17:52.681 "num_base_bdevs_operational": 1, 00:17:52.681 "base_bdevs_list": [ 00:17:52.681 { 00:17:52.681 "name": null, 00:17:52.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.681 "is_configured": false, 00:17:52.681 "data_offset": 0, 00:17:52.681 "data_size": 7936 00:17:52.681 }, 00:17:52.681 { 00:17:52.681 "name": "BaseBdev2", 00:17:52.681 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:52.681 "is_configured": true, 00:17:52.681 "data_offset": 256, 00:17:52.681 "data_size": 7936 00:17:52.681 } 00:17:52.681 ] 00:17:52.681 }' 00:17:52.681 17:10:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.681 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.250 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:53.250 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.250 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.250 [2024-11-20 17:10:16.850685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:53.250 [2024-11-20 17:10:16.850942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.250 [2024-11-20 17:10:16.850981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:53.250 [2024-11-20 17:10:16.851000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.250 [2024-11-20 17:10:16.851656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.250 [2024-11-20 17:10:16.851695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:53.250 [2024-11-20 17:10:16.851822] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:53.250 [2024-11-20 17:10:16.851847] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:53.250 [2024-11-20 17:10:16.851863] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:53.250 [2024-11-20 17:10:16.851905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:53.250 [2024-11-20 17:10:16.866243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:53.250 spare 00:17:53.250 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.250 17:10:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:53.250 [2024-11-20 17:10:16.869164] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:54.204 17:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.204 17:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.204 17:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.204 17:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.204 17:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.204 17:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.204 17:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.204 17:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.204 17:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.204 17:10:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.204 17:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.204 "name": "raid_bdev1", 00:17:54.204 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:54.204 "strip_size_kb": 0, 00:17:54.204 
"state": "online", 00:17:54.204 "raid_level": "raid1", 00:17:54.204 "superblock": true, 00:17:54.204 "num_base_bdevs": 2, 00:17:54.204 "num_base_bdevs_discovered": 2, 00:17:54.204 "num_base_bdevs_operational": 2, 00:17:54.204 "process": { 00:17:54.204 "type": "rebuild", 00:17:54.204 "target": "spare", 00:17:54.204 "progress": { 00:17:54.204 "blocks": 2560, 00:17:54.204 "percent": 32 00:17:54.204 } 00:17:54.204 }, 00:17:54.204 "base_bdevs_list": [ 00:17:54.204 { 00:17:54.204 "name": "spare", 00:17:54.204 "uuid": "91da3cb7-9421-55e6-8b63-3e7a4f57c79b", 00:17:54.204 "is_configured": true, 00:17:54.204 "data_offset": 256, 00:17:54.204 "data_size": 7936 00:17:54.204 }, 00:17:54.204 { 00:17:54.204 "name": "BaseBdev2", 00:17:54.204 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:54.204 "is_configured": true, 00:17:54.204 "data_offset": 256, 00:17:54.204 "data_size": 7936 00:17:54.204 } 00:17:54.204 ] 00:17:54.204 }' 00:17:54.204 17:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.204 17:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.204 17:10:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.204 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.204 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:54.204 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.204 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.204 [2024-11-20 17:10:18.039253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.520 [2024-11-20 17:10:18.077754] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:54.520 [2024-11-20 17:10:18.077840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.520 [2024-11-20 17:10:18.077864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.520 [2024-11-20 17:10:18.077874] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.520 17:10:18 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.520 "name": "raid_bdev1", 00:17:54.520 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:54.520 "strip_size_kb": 0, 00:17:54.520 "state": "online", 00:17:54.520 "raid_level": "raid1", 00:17:54.520 "superblock": true, 00:17:54.520 "num_base_bdevs": 2, 00:17:54.520 "num_base_bdevs_discovered": 1, 00:17:54.520 "num_base_bdevs_operational": 1, 00:17:54.520 "base_bdevs_list": [ 00:17:54.520 { 00:17:54.520 "name": null, 00:17:54.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.520 "is_configured": false, 00:17:54.520 "data_offset": 0, 00:17:54.520 "data_size": 7936 00:17:54.520 }, 00:17:54.520 { 00:17:54.520 "name": "BaseBdev2", 00:17:54.520 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:54.520 "is_configured": true, 00:17:54.520 "data_offset": 256, 00:17:54.520 "data_size": 7936 00:17:54.520 } 00:17:54.520 ] 00:17:54.520 }' 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.520 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.778 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:54.778 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.778 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:54.778 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:54.778 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.778 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.778 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.778 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.778 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.037 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.037 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.037 "name": "raid_bdev1", 00:17:55.037 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:55.037 "strip_size_kb": 0, 00:17:55.037 "state": "online", 00:17:55.037 "raid_level": "raid1", 00:17:55.037 "superblock": true, 00:17:55.037 "num_base_bdevs": 2, 00:17:55.037 "num_base_bdevs_discovered": 1, 00:17:55.037 "num_base_bdevs_operational": 1, 00:17:55.037 "base_bdevs_list": [ 00:17:55.037 { 00:17:55.037 "name": null, 00:17:55.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.037 "is_configured": false, 00:17:55.037 "data_offset": 0, 00:17:55.037 "data_size": 7936 00:17:55.037 }, 00:17:55.037 { 00:17:55.037 "name": "BaseBdev2", 00:17:55.037 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:55.037 "is_configured": true, 00:17:55.037 "data_offset": 256, 00:17:55.037 "data_size": 7936 00:17:55.037 } 00:17:55.037 ] 00:17:55.037 }' 00:17:55.037 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.037 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:55.037 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.037 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:55.037 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:55.037 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.037 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.037 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.037 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:55.037 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.037 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.037 [2024-11-20 17:10:18.805179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:55.037 [2024-11-20 17:10:18.805261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.037 [2024-11-20 17:10:18.805313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:55.038 [2024-11-20 17:10:18.805337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.038 [2024-11-20 17:10:18.805967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.038 [2024-11-20 17:10:18.805999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:55.038 [2024-11-20 17:10:18.806092] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:55.038 [2024-11-20 17:10:18.806132] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:55.038 [2024-11-20 17:10:18.806178] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:55.038 [2024-11-20 17:10:18.806198] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:55.038 BaseBdev1 00:17:55.038 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.038 17:10:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:55.974 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:55.974 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.974 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.974 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.974 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.974 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:55.974 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.974 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.974 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.974 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.974 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.974 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.974 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.974 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.974 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.233 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.233 "name": "raid_bdev1", 00:17:56.233 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:56.233 "strip_size_kb": 0, 00:17:56.233 "state": "online", 00:17:56.233 "raid_level": "raid1", 00:17:56.233 "superblock": true, 00:17:56.233 "num_base_bdevs": 2, 00:17:56.233 "num_base_bdevs_discovered": 1, 00:17:56.233 "num_base_bdevs_operational": 1, 00:17:56.233 "base_bdevs_list": [ 00:17:56.233 { 00:17:56.233 "name": null, 00:17:56.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.233 "is_configured": false, 00:17:56.233 "data_offset": 0, 00:17:56.233 "data_size": 7936 00:17:56.233 }, 00:17:56.233 { 00:17:56.233 "name": "BaseBdev2", 00:17:56.233 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:56.233 "is_configured": true, 00:17:56.233 "data_offset": 256, 00:17:56.233 "data_size": 7936 00:17:56.233 } 00:17:56.233 ] 00:17:56.233 }' 00:17:56.233 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.233 17:10:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.491 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.491 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.491 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.491 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.491 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.491 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.491 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.491 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.491 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.491 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.749 "name": "raid_bdev1", 00:17:56.749 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:56.749 "strip_size_kb": 0, 00:17:56.749 "state": "online", 00:17:56.749 "raid_level": "raid1", 00:17:56.749 "superblock": true, 00:17:56.749 "num_base_bdevs": 2, 00:17:56.749 "num_base_bdevs_discovered": 1, 00:17:56.749 "num_base_bdevs_operational": 1, 00:17:56.749 "base_bdevs_list": [ 00:17:56.749 { 00:17:56.749 "name": null, 00:17:56.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.749 "is_configured": false, 00:17:56.749 "data_offset": 0, 00:17:56.749 "data_size": 7936 00:17:56.749 }, 00:17:56.749 { 00:17:56.749 "name": "BaseBdev2", 00:17:56.749 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:56.749 "is_configured": true, 00:17:56.749 "data_offset": 256, 00:17:56.749 "data_size": 7936 00:17:56.749 } 00:17:56.749 ] 00:17:56.749 }' 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.749 [2024-11-20 17:10:20.497737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.749 [2024-11-20 17:10:20.498009] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:56.749 [2024-11-20 17:10:20.498033] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:56.749 request: 00:17:56.749 { 00:17:56.749 "base_bdev": "BaseBdev1", 00:17:56.749 "raid_bdev": "raid_bdev1", 00:17:56.749 "method": "bdev_raid_add_base_bdev", 00:17:56.749 "req_id": 1 00:17:56.749 } 00:17:56.749 Got JSON-RPC error response 00:17:56.749 response: 00:17:56.749 { 00:17:56.749 "code": -22, 00:17:56.749 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:56.749 } 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:56.749 17:10:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:57.687 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:57.687 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.687 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.687 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.687 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.687 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:57.687 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.687 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.687 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.687 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.687 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.687 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.687 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:57.687 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.687 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.946 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.946 "name": "raid_bdev1", 00:17:57.946 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:57.946 "strip_size_kb": 0, 00:17:57.946 "state": "online", 00:17:57.946 "raid_level": "raid1", 00:17:57.946 "superblock": true, 00:17:57.946 "num_base_bdevs": 2, 00:17:57.946 "num_base_bdevs_discovered": 1, 00:17:57.946 "num_base_bdevs_operational": 1, 00:17:57.946 "base_bdevs_list": [ 00:17:57.946 { 00:17:57.946 "name": null, 00:17:57.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.946 "is_configured": false, 00:17:57.946 "data_offset": 0, 00:17:57.946 "data_size": 7936 00:17:57.946 }, 00:17:57.946 { 00:17:57.946 "name": "BaseBdev2", 00:17:57.946 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:57.946 "is_configured": true, 00:17:57.946 "data_offset": 256, 00:17:57.946 "data_size": 7936 00:17:57.946 } 00:17:57.946 ] 00:17:57.946 }' 00:17:57.946 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.946 17:10:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.205 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.205 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.205 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.205 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.205 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.205 17:10:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.205 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.205 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.205 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.205 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.464 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.464 "name": "raid_bdev1", 00:17:58.464 "uuid": "89a93e59-5a97-4df2-867c-30e7752c2c71", 00:17:58.464 "strip_size_kb": 0, 00:17:58.464 "state": "online", 00:17:58.464 "raid_level": "raid1", 00:17:58.464 "superblock": true, 00:17:58.464 "num_base_bdevs": 2, 00:17:58.464 "num_base_bdevs_discovered": 1, 00:17:58.464 "num_base_bdevs_operational": 1, 00:17:58.464 "base_bdevs_list": [ 00:17:58.464 { 00:17:58.464 "name": null, 00:17:58.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.464 "is_configured": false, 00:17:58.464 "data_offset": 0, 00:17:58.464 "data_size": 7936 00:17:58.464 }, 00:17:58.464 { 00:17:58.464 "name": "BaseBdev2", 00:17:58.464 "uuid": "aaf3c948-4e72-5836-9969-6927e562c471", 00:17:58.464 "is_configured": true, 00:17:58.464 "data_offset": 256, 00:17:58.464 "data_size": 7936 00:17:58.464 } 00:17:58.464 ] 00:17:58.464 }' 00:17:58.464 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.464 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.464 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.464 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.464 17:10:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86702 00:17:58.464 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86702 ']' 00:17:58.464 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86702 00:17:58.464 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:58.464 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.464 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86702 00:17:58.464 killing process with pid 86702 00:17:58.464 Received shutdown signal, test time was about 60.000000 seconds 00:17:58.464 00:17:58.464 Latency(us) 00:17:58.464 [2024-11-20T17:10:22.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.464 [2024-11-20T17:10:22.333Z] =================================================================================================================== 00:17:58.464 [2024-11-20T17:10:22.333Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:58.464 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:58.464 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:58.464 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86702' 00:17:58.464 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86702 00:17:58.464 [2024-11-20 17:10:22.249807] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.464 17:10:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86702 00:17:58.464 [2024-11-20 17:10:22.249995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.464 [2024-11-20 
17:10:22.250070] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.464 [2024-11-20 17:10:22.250089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:58.723 [2024-11-20 17:10:22.500145] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:00.100 17:10:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:00.100 00:18:00.100 real 0m21.465s 00:18:00.100 user 0m28.995s 00:18:00.100 sys 0m2.628s 00:18:00.100 17:10:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.100 17:10:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.100 ************************************ 00:18:00.100 END TEST raid_rebuild_test_sb_4k 00:18:00.100 ************************************ 00:18:00.100 17:10:23 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:00.100 17:10:23 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:00.100 17:10:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:00.100 17:10:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.100 17:10:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:00.100 ************************************ 00:18:00.100 START TEST raid_state_function_test_sb_md_separate 00:18:00.100 ************************************ 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:00.100 
17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:00.100 17:10:23 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87405 00:18:00.100 Process raid pid: 87405 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87405' 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87405 00:18:00.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87405 ']' 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.100 17:10:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.100 [2024-11-20 17:10:23.768313] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:18:00.100 [2024-11-20 17:10:23.768900] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.359 [2024-11-20 17:10:23.983056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.359 [2024-11-20 17:10:24.100492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.618 [2024-11-20 17:10:24.308405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.618 [2024-11-20 17:10:24.308470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.877 [2024-11-20 17:10:24.648022] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:00.877 [2024-11-20 17:10:24.648223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:18:00.877 [2024-11-20 17:10:24.648251] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:00.877 [2024-11-20 17:10:24.648277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.877 17:10:24 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.877 "name": "Existed_Raid", 00:18:00.877 "uuid": "2c66b570-9c32-43db-8a0d-809537907812", 00:18:00.877 "strip_size_kb": 0, 00:18:00.877 "state": "configuring", 00:18:00.877 "raid_level": "raid1", 00:18:00.877 "superblock": true, 00:18:00.877 "num_base_bdevs": 2, 00:18:00.877 "num_base_bdevs_discovered": 0, 00:18:00.877 "num_base_bdevs_operational": 2, 00:18:00.877 "base_bdevs_list": [ 00:18:00.877 { 00:18:00.877 "name": "BaseBdev1", 00:18:00.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.877 "is_configured": false, 00:18:00.877 "data_offset": 0, 00:18:00.877 "data_size": 0 00:18:00.877 }, 00:18:00.877 { 00:18:00.877 "name": "BaseBdev2", 00:18:00.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.877 "is_configured": false, 00:18:00.877 "data_offset": 0, 00:18:00.877 "data_size": 0 00:18:00.877 } 00:18:00.877 ] 00:18:00.877 }' 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.877 17:10:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.445 [2024-11-20 
17:10:25.172109] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:01.445 [2024-11-20 17:10:25.172328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.445 [2024-11-20 17:10:25.180083] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:01.445 [2024-11-20 17:10:25.180142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:01.445 [2024-11-20 17:10:25.180170] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:01.445 [2024-11-20 17:10:25.180186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.445 [2024-11-20 17:10:25.224954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.445 BaseBdev1 
00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.445 [ 00:18:01.445 { 00:18:01.445 "name": "BaseBdev1", 00:18:01.445 "aliases": [ 00:18:01.445 "27977217-65b3-4b4d-8f92-c1a62f2dac9a" 00:18:01.445 ], 00:18:01.445 "product_name": "Malloc disk", 00:18:01.445 
"block_size": 4096, 00:18:01.445 "num_blocks": 8192, 00:18:01.445 "uuid": "27977217-65b3-4b4d-8f92-c1a62f2dac9a", 00:18:01.445 "md_size": 32, 00:18:01.445 "md_interleave": false, 00:18:01.445 "dif_type": 0, 00:18:01.445 "assigned_rate_limits": { 00:18:01.445 "rw_ios_per_sec": 0, 00:18:01.445 "rw_mbytes_per_sec": 0, 00:18:01.445 "r_mbytes_per_sec": 0, 00:18:01.445 "w_mbytes_per_sec": 0 00:18:01.445 }, 00:18:01.445 "claimed": true, 00:18:01.445 "claim_type": "exclusive_write", 00:18:01.445 "zoned": false, 00:18:01.445 "supported_io_types": { 00:18:01.445 "read": true, 00:18:01.445 "write": true, 00:18:01.445 "unmap": true, 00:18:01.445 "flush": true, 00:18:01.445 "reset": true, 00:18:01.445 "nvme_admin": false, 00:18:01.445 "nvme_io": false, 00:18:01.445 "nvme_io_md": false, 00:18:01.445 "write_zeroes": true, 00:18:01.445 "zcopy": true, 00:18:01.445 "get_zone_info": false, 00:18:01.445 "zone_management": false, 00:18:01.445 "zone_append": false, 00:18:01.445 "compare": false, 00:18:01.445 "compare_and_write": false, 00:18:01.445 "abort": true, 00:18:01.445 "seek_hole": false, 00:18:01.445 "seek_data": false, 00:18:01.445 "copy": true, 00:18:01.445 "nvme_iov_md": false 00:18:01.445 }, 00:18:01.445 "memory_domains": [ 00:18:01.445 { 00:18:01.445 "dma_device_id": "system", 00:18:01.445 "dma_device_type": 1 00:18:01.445 }, 00:18:01.445 { 00:18:01.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.445 "dma_device_type": 2 00:18:01.445 } 00:18:01.445 ], 00:18:01.445 "driver_specific": {} 00:18:01.445 } 00:18:01.445 ] 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:01.445 17:10:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.445 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.446 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.446 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.446 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.446 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.446 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.446 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.446 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.446 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.446 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.446 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.446 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.446 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.446 "name": "Existed_Raid", 00:18:01.446 "uuid": "ad32a07c-0ceb-43f4-83b6-bf4856f7ed68", 
00:18:01.446 "strip_size_kb": 0, 00:18:01.446 "state": "configuring", 00:18:01.446 "raid_level": "raid1", 00:18:01.446 "superblock": true, 00:18:01.446 "num_base_bdevs": 2, 00:18:01.446 "num_base_bdevs_discovered": 1, 00:18:01.446 "num_base_bdevs_operational": 2, 00:18:01.446 "base_bdevs_list": [ 00:18:01.446 { 00:18:01.446 "name": "BaseBdev1", 00:18:01.446 "uuid": "27977217-65b3-4b4d-8f92-c1a62f2dac9a", 00:18:01.446 "is_configured": true, 00:18:01.446 "data_offset": 256, 00:18:01.446 "data_size": 7936 00:18:01.446 }, 00:18:01.446 { 00:18:01.446 "name": "BaseBdev2", 00:18:01.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.446 "is_configured": false, 00:18:01.446 "data_offset": 0, 00:18:01.446 "data_size": 0 00:18:01.446 } 00:18:01.446 ] 00:18:01.446 }' 00:18:01.446 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.705 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.964 [2024-11-20 17:10:25.785275] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:01.964 [2024-11-20 17:10:25.785331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:01.964 17:10:25 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.964 [2024-11-20 17:10:25.793270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.964 [2024-11-20 17:10:25.796246] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:01.964 [2024-11-20 17:10:25.796294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.964 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.223 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.223 "name": "Existed_Raid", 00:18:02.223 "uuid": "32b32f96-94d7-43ac-8146-1d72936ea606", 00:18:02.223 "strip_size_kb": 0, 00:18:02.223 "state": "configuring", 00:18:02.223 "raid_level": "raid1", 00:18:02.223 "superblock": true, 00:18:02.223 "num_base_bdevs": 2, 00:18:02.223 "num_base_bdevs_discovered": 1, 00:18:02.223 "num_base_bdevs_operational": 2, 00:18:02.223 "base_bdevs_list": [ 00:18:02.223 { 00:18:02.223 "name": "BaseBdev1", 00:18:02.223 "uuid": "27977217-65b3-4b4d-8f92-c1a62f2dac9a", 00:18:02.223 "is_configured": true, 00:18:02.223 "data_offset": 256, 00:18:02.223 "data_size": 7936 00:18:02.223 }, 00:18:02.223 { 00:18:02.223 "name": "BaseBdev2", 00:18:02.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.223 "is_configured": false, 00:18:02.223 "data_offset": 0, 00:18:02.223 "data_size": 0 00:18:02.223 } 00:18:02.223 ] 00:18:02.223 }' 00:18:02.223 17:10:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.223 17:10:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.481 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:02.482 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.482 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.740 [2024-11-20 17:10:26.359382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:02.740 [2024-11-20 17:10:26.359727] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:02.740 [2024-11-20 17:10:26.359751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:02.740 [2024-11-20 17:10:26.359893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:02.740 [2024-11-20 17:10:26.360088] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:02.740 [2024-11-20 17:10:26.360107] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:02.740 [2024-11-20 17:10:26.360260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.740 BaseBdev2 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.740 [ 00:18:02.740 { 00:18:02.740 "name": "BaseBdev2", 00:18:02.740 "aliases": [ 00:18:02.740 "1771461c-659a-42d2-be7b-95ae8c69c9c5" 00:18:02.740 ], 00:18:02.740 "product_name": "Malloc disk", 00:18:02.740 "block_size": 4096, 00:18:02.740 "num_blocks": 8192, 00:18:02.740 "uuid": "1771461c-659a-42d2-be7b-95ae8c69c9c5", 00:18:02.740 "md_size": 32, 00:18:02.740 "md_interleave": false, 00:18:02.740 "dif_type": 0, 00:18:02.740 "assigned_rate_limits": { 00:18:02.740 "rw_ios_per_sec": 0, 00:18:02.740 "rw_mbytes_per_sec": 0, 00:18:02.740 "r_mbytes_per_sec": 0, 00:18:02.740 "w_mbytes_per_sec": 0 00:18:02.740 }, 00:18:02.740 "claimed": true, 00:18:02.740 "claim_type": 
"exclusive_write", 00:18:02.740 "zoned": false, 00:18:02.740 "supported_io_types": { 00:18:02.740 "read": true, 00:18:02.740 "write": true, 00:18:02.740 "unmap": true, 00:18:02.740 "flush": true, 00:18:02.740 "reset": true, 00:18:02.740 "nvme_admin": false, 00:18:02.740 "nvme_io": false, 00:18:02.740 "nvme_io_md": false, 00:18:02.740 "write_zeroes": true, 00:18:02.740 "zcopy": true, 00:18:02.740 "get_zone_info": false, 00:18:02.740 "zone_management": false, 00:18:02.740 "zone_append": false, 00:18:02.740 "compare": false, 00:18:02.740 "compare_and_write": false, 00:18:02.740 "abort": true, 00:18:02.740 "seek_hole": false, 00:18:02.740 "seek_data": false, 00:18:02.740 "copy": true, 00:18:02.740 "nvme_iov_md": false 00:18:02.740 }, 00:18:02.740 "memory_domains": [ 00:18:02.740 { 00:18:02.740 "dma_device_id": "system", 00:18:02.740 "dma_device_type": 1 00:18:02.740 }, 00:18:02.740 { 00:18:02.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.740 "dma_device_type": 2 00:18:02.740 } 00:18:02.740 ], 00:18:02.740 "driver_specific": {} 00:18:02.740 } 00:18:02.740 ] 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.740 
17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.740 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.741 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.741 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.741 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.741 "name": "Existed_Raid", 00:18:02.741 "uuid": "32b32f96-94d7-43ac-8146-1d72936ea606", 00:18:02.741 "strip_size_kb": 0, 00:18:02.741 "state": "online", 00:18:02.741 "raid_level": "raid1", 00:18:02.741 "superblock": true, 00:18:02.741 "num_base_bdevs": 2, 00:18:02.741 "num_base_bdevs_discovered": 2, 00:18:02.741 "num_base_bdevs_operational": 2, 00:18:02.741 
"base_bdevs_list": [ 00:18:02.741 { 00:18:02.741 "name": "BaseBdev1", 00:18:02.741 "uuid": "27977217-65b3-4b4d-8f92-c1a62f2dac9a", 00:18:02.741 "is_configured": true, 00:18:02.741 "data_offset": 256, 00:18:02.741 "data_size": 7936 00:18:02.741 }, 00:18:02.741 { 00:18:02.741 "name": "BaseBdev2", 00:18:02.741 "uuid": "1771461c-659a-42d2-be7b-95ae8c69c9c5", 00:18:02.741 "is_configured": true, 00:18:02.741 "data_offset": 256, 00:18:02.741 "data_size": 7936 00:18:02.741 } 00:18:02.741 ] 00:18:02.741 }' 00:18:02.741 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.741 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.308 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:03.308 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:03.308 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:03.308 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:03.308 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:03.308 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:03.308 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:03.308 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.308 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:03.308 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:03.308 [2024-11-20 17:10:26.928007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.308 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.308 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:03.308 "name": "Existed_Raid", 00:18:03.308 "aliases": [ 00:18:03.308 "32b32f96-94d7-43ac-8146-1d72936ea606" 00:18:03.308 ], 00:18:03.308 "product_name": "Raid Volume", 00:18:03.308 "block_size": 4096, 00:18:03.308 "num_blocks": 7936, 00:18:03.308 "uuid": "32b32f96-94d7-43ac-8146-1d72936ea606", 00:18:03.308 "md_size": 32, 00:18:03.308 "md_interleave": false, 00:18:03.308 "dif_type": 0, 00:18:03.308 "assigned_rate_limits": { 00:18:03.308 "rw_ios_per_sec": 0, 00:18:03.308 "rw_mbytes_per_sec": 0, 00:18:03.308 "r_mbytes_per_sec": 0, 00:18:03.308 "w_mbytes_per_sec": 0 00:18:03.308 }, 00:18:03.308 "claimed": false, 00:18:03.308 "zoned": false, 00:18:03.308 "supported_io_types": { 00:18:03.308 "read": true, 00:18:03.308 "write": true, 00:18:03.308 "unmap": false, 00:18:03.308 "flush": false, 00:18:03.308 "reset": true, 00:18:03.308 "nvme_admin": false, 00:18:03.308 "nvme_io": false, 00:18:03.308 "nvme_io_md": false, 00:18:03.308 "write_zeroes": true, 00:18:03.308 "zcopy": false, 00:18:03.308 "get_zone_info": false, 00:18:03.308 "zone_management": false, 00:18:03.308 "zone_append": false, 00:18:03.308 "compare": false, 00:18:03.308 "compare_and_write": false, 00:18:03.308 "abort": false, 00:18:03.308 "seek_hole": false, 00:18:03.308 "seek_data": false, 00:18:03.308 "copy": false, 00:18:03.308 "nvme_iov_md": false 00:18:03.308 }, 00:18:03.308 "memory_domains": [ 00:18:03.308 { 00:18:03.308 "dma_device_id": "system", 00:18:03.308 "dma_device_type": 1 00:18:03.308 }, 00:18:03.308 { 00:18:03.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.308 "dma_device_type": 2 00:18:03.308 }, 00:18:03.308 { 
00:18:03.308 "dma_device_id": "system", 00:18:03.308 "dma_device_type": 1 00:18:03.308 }, 00:18:03.308 { 00:18:03.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.308 "dma_device_type": 2 00:18:03.308 } 00:18:03.308 ], 00:18:03.308 "driver_specific": { 00:18:03.308 "raid": { 00:18:03.308 "uuid": "32b32f96-94d7-43ac-8146-1d72936ea606", 00:18:03.308 "strip_size_kb": 0, 00:18:03.308 "state": "online", 00:18:03.308 "raid_level": "raid1", 00:18:03.308 "superblock": true, 00:18:03.308 "num_base_bdevs": 2, 00:18:03.308 "num_base_bdevs_discovered": 2, 00:18:03.308 "num_base_bdevs_operational": 2, 00:18:03.308 "base_bdevs_list": [ 00:18:03.308 { 00:18:03.308 "name": "BaseBdev1", 00:18:03.308 "uuid": "27977217-65b3-4b4d-8f92-c1a62f2dac9a", 00:18:03.308 "is_configured": true, 00:18:03.308 "data_offset": 256, 00:18:03.308 "data_size": 7936 00:18:03.308 }, 00:18:03.308 { 00:18:03.308 "name": "BaseBdev2", 00:18:03.308 "uuid": "1771461c-659a-42d2-be7b-95ae8c69c9c5", 00:18:03.308 "is_configured": true, 00:18:03.308 "data_offset": 256, 00:18:03.308 "data_size": 7936 00:18:03.308 } 00:18:03.308 ] 00:18:03.308 } 00:18:03.308 } 00:18:03.308 }' 00:18:03.308 17:10:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:03.308 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:03.308 BaseBdev2' 00:18:03.308 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.308 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:03.308 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.308 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:03.308 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.308 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.308 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.308 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.308 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:03.308 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:03.308 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.308 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:03.308 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.308 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.308 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.308 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.567 [2024-11-20 17:10:27.195712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.567 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.568 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.568 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.568 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.568 "name": "Existed_Raid", 00:18:03.568 "uuid": "32b32f96-94d7-43ac-8146-1d72936ea606", 00:18:03.568 "strip_size_kb": 0, 00:18:03.568 "state": "online", 00:18:03.568 "raid_level": "raid1", 00:18:03.568 "superblock": true, 00:18:03.568 "num_base_bdevs": 2, 00:18:03.568 "num_base_bdevs_discovered": 1, 00:18:03.568 "num_base_bdevs_operational": 1, 00:18:03.568 "base_bdevs_list": [ 00:18:03.568 { 00:18:03.568 "name": null, 00:18:03.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.568 "is_configured": false, 00:18:03.568 "data_offset": 0, 00:18:03.568 "data_size": 7936 00:18:03.568 }, 00:18:03.568 { 00:18:03.568 "name": "BaseBdev2", 00:18:03.568 "uuid": 
"1771461c-659a-42d2-be7b-95ae8c69c9c5", 00:18:03.568 "is_configured": true, 00:18:03.568 "data_offset": 256, 00:18:03.568 "data_size": 7936 00:18:03.568 } 00:18:03.568 ] 00:18:03.568 }' 00:18:03.568 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.568 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.135 [2024-11-20 17:10:27.900947] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:04.135 [2024-11-20 17:10:27.901075] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.135 [2024-11-20 17:10:27.986727] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.135 [2024-11-20 17:10:27.986824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.135 [2024-11-20 17:10:27.986860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.135 17:10:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.394 17:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.394 17:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:04.394 17:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:04.394 17:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:04.394 17:10:28 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87405 00:18:04.394 17:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87405 ']' 00:18:04.394 17:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87405 00:18:04.394 17:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:04.394 17:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.394 17:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87405 00:18:04.394 killing process with pid 87405 00:18:04.394 17:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.394 17:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.394 17:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87405' 00:18:04.394 17:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87405 00:18:04.394 [2024-11-20 17:10:28.070091] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:04.394 17:10:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87405 00:18:04.394 [2024-11-20 17:10:28.085715] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:05.347 17:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:05.347 00:18:05.347 real 0m5.472s 00:18:05.347 user 0m8.233s 00:18:05.347 sys 0m0.824s 00:18:05.347 17:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.347 
17:10:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.347 ************************************ 00:18:05.347 END TEST raid_state_function_test_sb_md_separate 00:18:05.347 ************************************ 00:18:05.347 17:10:29 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:05.347 17:10:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:05.347 17:10:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.347 17:10:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:05.347 ************************************ 00:18:05.347 START TEST raid_superblock_test_md_separate 00:18:05.347 ************************************ 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87658 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87658 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87658 ']' 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.347 17:10:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.605 [2024-11-20 17:10:29.236590] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:18:05.605 [2024-11-20 17:10:29.236995] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87658 ] 00:18:05.605 [2024-11-20 17:10:29.416827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.862 [2024-11-20 17:10:29.539378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.862 [2024-11-20 17:10:29.729286] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.862 [2024-11-20 17:10:29.729365] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:06.430 17:10:30 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.430 malloc1 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.430 [2024-11-20 17:10:30.212362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:06.430 [2024-11-20 17:10:30.212588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.430 [2024-11-20 17:10:30.212663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:06.430 [2024-11-20 17:10:30.212845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.430 [2024-11-20 17:10:30.215502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.430 [2024-11-20 17:10:30.215708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:18:06.430 pt1 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.430 malloc2 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:06.430 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.430 17:10:30 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.430 [2024-11-20 17:10:30.269493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:06.430 [2024-11-20 17:10:30.269582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.430 [2024-11-20 17:10:30.269611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:06.430 [2024-11-20 17:10:30.269624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.431 [2024-11-20 17:10:30.272317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.431 [2024-11-20 17:10:30.272507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:06.431 pt2 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.431 [2024-11-20 17:10:30.281511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:06.431 [2024-11-20 17:10:30.283997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:06.431 [2024-11-20 17:10:30.284268] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:06.431 [2024-11-20 17:10:30.284287] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:06.431 [2024-11-20 17:10:30.284364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:06.431 [2024-11-20 17:10:30.284503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:06.431 [2024-11-20 17:10:30.284520] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:06.431 [2024-11-20 17:10:30.284629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.431 17:10:30 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.431 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.689 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.689 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.689 "name": "raid_bdev1", 00:18:06.689 "uuid": "f0319cf8-0086-4e2e-aa63-4a21bfed0084", 00:18:06.689 "strip_size_kb": 0, 00:18:06.689 "state": "online", 00:18:06.689 "raid_level": "raid1", 00:18:06.689 "superblock": true, 00:18:06.689 "num_base_bdevs": 2, 00:18:06.689 "num_base_bdevs_discovered": 2, 00:18:06.689 "num_base_bdevs_operational": 2, 00:18:06.689 "base_bdevs_list": [ 00:18:06.689 { 00:18:06.689 "name": "pt1", 00:18:06.689 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:06.689 "is_configured": true, 00:18:06.689 "data_offset": 256, 00:18:06.689 "data_size": 7936 00:18:06.689 }, 00:18:06.689 { 00:18:06.689 "name": "pt2", 00:18:06.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.689 "is_configured": true, 00:18:06.689 "data_offset": 256, 00:18:06.689 "data_size": 7936 00:18:06.689 } 00:18:06.689 ] 00:18:06.689 }' 00:18:06.689 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.689 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.948 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:06.948 17:10:30 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:06.948 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:06.948 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:06.948 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:06.948 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:06.948 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.948 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.948 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:06.948 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.948 [2024-11-20 17:10:30.809991] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.207 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.207 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:07.207 "name": "raid_bdev1", 00:18:07.207 "aliases": [ 00:18:07.207 "f0319cf8-0086-4e2e-aa63-4a21bfed0084" 00:18:07.207 ], 00:18:07.207 "product_name": "Raid Volume", 00:18:07.207 "block_size": 4096, 00:18:07.207 "num_blocks": 7936, 00:18:07.207 "uuid": "f0319cf8-0086-4e2e-aa63-4a21bfed0084", 00:18:07.207 "md_size": 32, 00:18:07.207 "md_interleave": false, 00:18:07.207 "dif_type": 0, 00:18:07.207 "assigned_rate_limits": { 00:18:07.207 "rw_ios_per_sec": 0, 00:18:07.207 "rw_mbytes_per_sec": 0, 00:18:07.207 "r_mbytes_per_sec": 0, 00:18:07.207 "w_mbytes_per_sec": 0 00:18:07.207 }, 00:18:07.207 "claimed": false, 00:18:07.207 "zoned": false, 
00:18:07.207 "supported_io_types": { 00:18:07.207 "read": true, 00:18:07.207 "write": true, 00:18:07.207 "unmap": false, 00:18:07.207 "flush": false, 00:18:07.207 "reset": true, 00:18:07.207 "nvme_admin": false, 00:18:07.207 "nvme_io": false, 00:18:07.207 "nvme_io_md": false, 00:18:07.207 "write_zeroes": true, 00:18:07.207 "zcopy": false, 00:18:07.207 "get_zone_info": false, 00:18:07.207 "zone_management": false, 00:18:07.207 "zone_append": false, 00:18:07.207 "compare": false, 00:18:07.207 "compare_and_write": false, 00:18:07.207 "abort": false, 00:18:07.207 "seek_hole": false, 00:18:07.207 "seek_data": false, 00:18:07.207 "copy": false, 00:18:07.207 "nvme_iov_md": false 00:18:07.207 }, 00:18:07.207 "memory_domains": [ 00:18:07.207 { 00:18:07.207 "dma_device_id": "system", 00:18:07.207 "dma_device_type": 1 00:18:07.207 }, 00:18:07.207 { 00:18:07.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.207 "dma_device_type": 2 00:18:07.207 }, 00:18:07.207 { 00:18:07.207 "dma_device_id": "system", 00:18:07.207 "dma_device_type": 1 00:18:07.207 }, 00:18:07.207 { 00:18:07.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.207 "dma_device_type": 2 00:18:07.207 } 00:18:07.207 ], 00:18:07.207 "driver_specific": { 00:18:07.207 "raid": { 00:18:07.207 "uuid": "f0319cf8-0086-4e2e-aa63-4a21bfed0084", 00:18:07.207 "strip_size_kb": 0, 00:18:07.207 "state": "online", 00:18:07.207 "raid_level": "raid1", 00:18:07.207 "superblock": true, 00:18:07.207 "num_base_bdevs": 2, 00:18:07.207 "num_base_bdevs_discovered": 2, 00:18:07.207 "num_base_bdevs_operational": 2, 00:18:07.207 "base_bdevs_list": [ 00:18:07.207 { 00:18:07.207 "name": "pt1", 00:18:07.207 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:07.207 "is_configured": true, 00:18:07.207 "data_offset": 256, 00:18:07.207 "data_size": 7936 00:18:07.207 }, 00:18:07.207 { 00:18:07.207 "name": "pt2", 00:18:07.207 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.207 "is_configured": true, 00:18:07.207 "data_offset": 256, 
00:18:07.207 "data_size": 7936 00:18:07.207 } 00:18:07.207 ] 00:18:07.207 } 00:18:07.207 } 00:18:07.207 }' 00:18:07.207 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:07.207 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:07.207 pt2' 00:18:07.207 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.207 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:07.207 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:07.207 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:07.207 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.207 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.207 17:10:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.207 17:10:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.207 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:07.207 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:07.207 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:07.207 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:18:07.207 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.207 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.207 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.207 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.207 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:07.207 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:07.207 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:07.207 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.207 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.207 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:07.466 [2024-11-20 17:10:31.078084] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.466 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.466 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f0319cf8-0086-4e2e-aa63-4a21bfed0084 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z f0319cf8-0086-4e2e-aa63-4a21bfed0084 ']' 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.467 [2024-11-20 17:10:31.129665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.467 [2024-11-20 17:10:31.129689] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.467 [2024-11-20 17:10:31.129818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.467 [2024-11-20 17:10:31.129901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.467 [2024-11-20 17:10:31.129918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:07.467 17:10:31 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.467 [2024-11-20 17:10:31.269707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:07.467 [2024-11-20 17:10:31.272306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:07.467 [2024-11-20 17:10:31.272500] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:07.467 [2024-11-20 17:10:31.272607] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:07.467 [2024-11-20 17:10:31.272631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.467 [2024-11-20 17:10:31.272645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:07.467 request: 00:18:07.467 { 00:18:07.467 "name": 
"raid_bdev1", 00:18:07.467 "raid_level": "raid1", 00:18:07.467 "base_bdevs": [ 00:18:07.467 "malloc1", 00:18:07.467 "malloc2" 00:18:07.467 ], 00:18:07.467 "superblock": false, 00:18:07.467 "method": "bdev_raid_create", 00:18:07.467 "req_id": 1 00:18:07.467 } 00:18:07.467 Got JSON-RPC error response 00:18:07.467 response: 00:18:07.467 { 00:18:07.467 "code": -17, 00:18:07.467 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:07.467 } 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.467 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.726 [2024-11-20 17:10:31.337717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:07.726 [2024-11-20 17:10:31.337816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.726 [2024-11-20 17:10:31.337843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:07.726 [2024-11-20 17:10:31.337873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.726 [2024-11-20 17:10:31.340494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.726 [2024-11-20 17:10:31.340552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:07.726 [2024-11-20 17:10:31.340601] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:07.726 [2024-11-20 17:10:31.340659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:07.726 pt1 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.726 "name": "raid_bdev1", 00:18:07.726 "uuid": "f0319cf8-0086-4e2e-aa63-4a21bfed0084", 00:18:07.726 "strip_size_kb": 0, 00:18:07.726 "state": "configuring", 00:18:07.726 "raid_level": "raid1", 00:18:07.726 "superblock": true, 00:18:07.726 "num_base_bdevs": 2, 00:18:07.726 "num_base_bdevs_discovered": 1, 00:18:07.726 "num_base_bdevs_operational": 2, 00:18:07.726 "base_bdevs_list": [ 00:18:07.726 { 00:18:07.726 "name": "pt1", 00:18:07.726 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:07.726 "is_configured": true, 00:18:07.726 "data_offset": 256, 00:18:07.726 "data_size": 7936 00:18:07.726 }, 00:18:07.726 { 00:18:07.726 "name": null, 00:18:07.726 
"uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.726 "is_configured": false, 00:18:07.726 "data_offset": 256, 00:18:07.726 "data_size": 7936 00:18:07.726 } 00:18:07.726 ] 00:18:07.726 }' 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.726 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.292 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:08.292 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:08.292 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:08.292 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:08.292 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.292 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.292 [2024-11-20 17:10:31.857968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:08.292 [2024-11-20 17:10:31.858057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.292 [2024-11-20 17:10:31.858103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:08.292 [2024-11-20 17:10:31.858150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.292 [2024-11-20 17:10:31.858429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.292 [2024-11-20 17:10:31.858458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:08.292 [2024-11-20 17:10:31.858533] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:18:08.292 [2024-11-20 17:10:31.858564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:08.293 [2024-11-20 17:10:31.858702] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:08.293 [2024-11-20 17:10:31.858719] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:08.293 [2024-11-20 17:10:31.858853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:08.293 [2024-11-20 17:10:31.859033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:08.293 [2024-11-20 17:10:31.859049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:08.293 [2024-11-20 17:10:31.859200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.293 pt2 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.293 "name": "raid_bdev1", 00:18:08.293 "uuid": "f0319cf8-0086-4e2e-aa63-4a21bfed0084", 00:18:08.293 "strip_size_kb": 0, 00:18:08.293 "state": "online", 00:18:08.293 "raid_level": "raid1", 00:18:08.293 "superblock": true, 00:18:08.293 "num_base_bdevs": 2, 00:18:08.293 "num_base_bdevs_discovered": 2, 00:18:08.293 "num_base_bdevs_operational": 2, 00:18:08.293 "base_bdevs_list": [ 00:18:08.293 { 00:18:08.293 "name": "pt1", 00:18:08.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:08.293 "is_configured": true, 00:18:08.293 "data_offset": 256, 00:18:08.293 "data_size": 7936 00:18:08.293 }, 00:18:08.293 { 00:18:08.293 "name": "pt2", 00:18:08.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.293 "is_configured": true, 00:18:08.293 "data_offset": 256, 
00:18:08.293 "data_size": 7936 00:18:08.293 } 00:18:08.293 ] 00:18:08.293 }' 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.293 17:10:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.551 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:08.551 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:08.551 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:08.551 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:08.551 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:08.551 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:08.551 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:08.551 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:08.551 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.551 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.551 [2024-11-20 17:10:32.366423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:08.551 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.551 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:08.551 "name": "raid_bdev1", 00:18:08.551 "aliases": [ 00:18:08.551 "f0319cf8-0086-4e2e-aa63-4a21bfed0084" 00:18:08.551 ], 00:18:08.551 "product_name": 
"Raid Volume", 00:18:08.551 "block_size": 4096, 00:18:08.551 "num_blocks": 7936, 00:18:08.551 "uuid": "f0319cf8-0086-4e2e-aa63-4a21bfed0084", 00:18:08.551 "md_size": 32, 00:18:08.551 "md_interleave": false, 00:18:08.551 "dif_type": 0, 00:18:08.551 "assigned_rate_limits": { 00:18:08.551 "rw_ios_per_sec": 0, 00:18:08.551 "rw_mbytes_per_sec": 0, 00:18:08.552 "r_mbytes_per_sec": 0, 00:18:08.552 "w_mbytes_per_sec": 0 00:18:08.552 }, 00:18:08.552 "claimed": false, 00:18:08.552 "zoned": false, 00:18:08.552 "supported_io_types": { 00:18:08.552 "read": true, 00:18:08.552 "write": true, 00:18:08.552 "unmap": false, 00:18:08.552 "flush": false, 00:18:08.552 "reset": true, 00:18:08.552 "nvme_admin": false, 00:18:08.552 "nvme_io": false, 00:18:08.552 "nvme_io_md": false, 00:18:08.552 "write_zeroes": true, 00:18:08.552 "zcopy": false, 00:18:08.552 "get_zone_info": false, 00:18:08.552 "zone_management": false, 00:18:08.552 "zone_append": false, 00:18:08.552 "compare": false, 00:18:08.552 "compare_and_write": false, 00:18:08.552 "abort": false, 00:18:08.552 "seek_hole": false, 00:18:08.552 "seek_data": false, 00:18:08.552 "copy": false, 00:18:08.552 "nvme_iov_md": false 00:18:08.552 }, 00:18:08.552 "memory_domains": [ 00:18:08.552 { 00:18:08.552 "dma_device_id": "system", 00:18:08.552 "dma_device_type": 1 00:18:08.552 }, 00:18:08.552 { 00:18:08.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.552 "dma_device_type": 2 00:18:08.552 }, 00:18:08.552 { 00:18:08.552 "dma_device_id": "system", 00:18:08.552 "dma_device_type": 1 00:18:08.552 }, 00:18:08.552 { 00:18:08.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.552 "dma_device_type": 2 00:18:08.552 } 00:18:08.552 ], 00:18:08.552 "driver_specific": { 00:18:08.552 "raid": { 00:18:08.552 "uuid": "f0319cf8-0086-4e2e-aa63-4a21bfed0084", 00:18:08.552 "strip_size_kb": 0, 00:18:08.552 "state": "online", 00:18:08.552 "raid_level": "raid1", 00:18:08.552 "superblock": true, 00:18:08.552 "num_base_bdevs": 2, 00:18:08.552 
"num_base_bdevs_discovered": 2, 00:18:08.552 "num_base_bdevs_operational": 2, 00:18:08.552 "base_bdevs_list": [ 00:18:08.552 { 00:18:08.552 "name": "pt1", 00:18:08.552 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:08.552 "is_configured": true, 00:18:08.552 "data_offset": 256, 00:18:08.552 "data_size": 7936 00:18:08.552 }, 00:18:08.552 { 00:18:08.552 "name": "pt2", 00:18:08.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.552 "is_configured": true, 00:18:08.552 "data_offset": 256, 00:18:08.552 "data_size": 7936 00:18:08.552 } 00:18:08.552 ] 00:18:08.552 } 00:18:08.552 } 00:18:08.552 }' 00:18:08.552 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:08.810 pt2' 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.810 
17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.810 [2024-11-20 17:10:32.634479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:08.810 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' f0319cf8-0086-4e2e-aa63-4a21bfed0084 '!=' f0319cf8-0086-4e2e-aa63-4a21bfed0084 ']' 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.069 [2024-11-20 17:10:32.686217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.069 17:10:32 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.069 "name": "raid_bdev1", 00:18:09.069 "uuid": "f0319cf8-0086-4e2e-aa63-4a21bfed0084", 00:18:09.069 "strip_size_kb": 0, 00:18:09.069 "state": "online", 00:18:09.069 "raid_level": "raid1", 00:18:09.069 "superblock": true, 00:18:09.069 "num_base_bdevs": 2, 00:18:09.069 "num_base_bdevs_discovered": 1, 00:18:09.069 "num_base_bdevs_operational": 1, 00:18:09.069 "base_bdevs_list": [ 00:18:09.069 { 00:18:09.069 "name": null, 00:18:09.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.069 "is_configured": false, 00:18:09.069 "data_offset": 0, 00:18:09.069 "data_size": 7936 00:18:09.069 }, 00:18:09.069 { 00:18:09.069 "name": "pt2", 00:18:09.069 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.069 "is_configured": true, 00:18:09.069 "data_offset": 256, 00:18:09.069 "data_size": 7936 00:18:09.069 } 00:18:09.069 ] 00:18:09.069 }' 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:09.069 17:10:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.637 [2024-11-20 17:10:33.222385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.637 [2024-11-20 17:10:33.222416] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:09.637 [2024-11-20 17:10:33.222499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.637 [2024-11-20 17:10:33.222571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.637 [2024-11-20 17:10:33.222601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:09.637 17:10:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.637 [2024-11-20 17:10:33.302392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:09.637 [2024-11-20 17:10:33.302464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.637 
[2024-11-20 17:10:33.302485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:09.637 [2024-11-20 17:10:33.302514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.637 [2024-11-20 17:10:33.305209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.637 [2024-11-20 17:10:33.305267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:09.637 [2024-11-20 17:10:33.305342] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:09.637 [2024-11-20 17:10:33.305396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:09.637 [2024-11-20 17:10:33.305532] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:09.637 [2024-11-20 17:10:33.305551] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:09.637 [2024-11-20 17:10:33.305632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:09.637 [2024-11-20 17:10:33.305814] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:09.637 [2024-11-20 17:10:33.305834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:09.637 [2024-11-20 17:10:33.305971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.637 pt2 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.637 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.637 "name": "raid_bdev1", 00:18:09.637 "uuid": "f0319cf8-0086-4e2e-aa63-4a21bfed0084", 00:18:09.637 "strip_size_kb": 0, 00:18:09.637 "state": "online", 00:18:09.637 "raid_level": "raid1", 00:18:09.637 "superblock": true, 00:18:09.637 "num_base_bdevs": 2, 00:18:09.637 "num_base_bdevs_discovered": 1, 00:18:09.637 "num_base_bdevs_operational": 1, 00:18:09.637 "base_bdevs_list": [ 00:18:09.637 { 00:18:09.637 
"name": null, 00:18:09.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.637 "is_configured": false, 00:18:09.637 "data_offset": 256, 00:18:09.637 "data_size": 7936 00:18:09.637 }, 00:18:09.637 { 00:18:09.637 "name": "pt2", 00:18:09.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.637 "is_configured": true, 00:18:09.637 "data_offset": 256, 00:18:09.637 "data_size": 7936 00:18:09.637 } 00:18:09.637 ] 00:18:09.638 }' 00:18:09.638 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.638 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.205 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:10.205 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.205 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.205 [2024-11-20 17:10:33.826515] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:10.205 [2024-11-20 17:10:33.826547] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:10.205 [2024-11-20 17:10:33.826619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.205 [2024-11-20 17:10:33.826678] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:10.205 [2024-11-20 17:10:33.826691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:10.205 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.205 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.205 17:10:33 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.205 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.205 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:10.205 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.205 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:10.205 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:10.205 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:10.205 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:10.205 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.205 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.205 [2024-11-20 17:10:33.890554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:10.205 [2024-11-20 17:10:33.890622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.205 [2024-11-20 17:10:33.890648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:10.205 [2024-11-20 17:10:33.890660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.205 [2024-11-20 17:10:33.893340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.205 [2024-11-20 17:10:33.893378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:10.205 [2024-11-20 17:10:33.893458] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:18:10.205 [2024-11-20 17:10:33.893521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:10.205 [2024-11-20 17:10:33.893660] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:10.205 [2024-11-20 17:10:33.893675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:10.205 [2024-11-20 17:10:33.893694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:10.205 [2024-11-20 17:10:33.893778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:10.205 [2024-11-20 17:10:33.893898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:10.205 [2024-11-20 17:10:33.893913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:10.205 [2024-11-20 17:10:33.894017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:10.206 [2024-11-20 17:10:33.894176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:10.206 [2024-11-20 17:10:33.894199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:10.206 [2024-11-20 17:10:33.894332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.206 pt1 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.206 "name": "raid_bdev1", 00:18:10.206 "uuid": "f0319cf8-0086-4e2e-aa63-4a21bfed0084", 00:18:10.206 "strip_size_kb": 0, 00:18:10.206 "state": "online", 00:18:10.206 "raid_level": "raid1", 00:18:10.206 "superblock": true, 00:18:10.206 "num_base_bdevs": 2, 00:18:10.206 "num_base_bdevs_discovered": 1, 00:18:10.206 
"num_base_bdevs_operational": 1, 00:18:10.206 "base_bdevs_list": [ 00:18:10.206 { 00:18:10.206 "name": null, 00:18:10.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.206 "is_configured": false, 00:18:10.206 "data_offset": 256, 00:18:10.206 "data_size": 7936 00:18:10.206 }, 00:18:10.206 { 00:18:10.206 "name": "pt2", 00:18:10.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:10.206 "is_configured": true, 00:18:10.206 "data_offset": 256, 00:18:10.206 "data_size": 7936 00:18:10.206 } 00:18:10.206 ] 00:18:10.206 }' 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.206 17:10:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.772 [2024-11-20 
17:10:34.459091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' f0319cf8-0086-4e2e-aa63-4a21bfed0084 '!=' f0319cf8-0086-4e2e-aa63-4a21bfed0084 ']' 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87658 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87658 ']' 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87658 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87658 00:18:10.772 killing process with pid 87658 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87658' 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87658 00:18:10.772 [2024-11-20 17:10:34.539315] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:10.772 [2024-11-20 17:10:34.539395] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.772 17:10:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87658 
00:18:10.772 [2024-11-20 17:10:34.539458] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:10.772 [2024-11-20 17:10:34.539479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:11.031 [2024-11-20 17:10:34.702940] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:11.967 17:10:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:11.967 ************************************ 00:18:11.967 END TEST raid_superblock_test_md_separate 00:18:11.967 ************************************ 00:18:11.967 00:18:11.967 real 0m6.501s 00:18:11.967 user 0m10.350s 00:18:11.967 sys 0m0.976s 00:18:11.967 17:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.967 17:10:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.967 17:10:35 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:11.967 17:10:35 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:11.967 17:10:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:11.968 17:10:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.968 17:10:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:11.968 ************************************ 00:18:11.968 START TEST raid_rebuild_test_sb_md_separate 00:18:11.968 ************************************ 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:11.968 
17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87987 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87987 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87987 ']' 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.968 17:10:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.968 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:11.968 Zero copy mechanism will not be used. 00:18:11.968 [2024-11-20 17:10:35.804957] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:18:11.968 [2024-11-20 17:10:35.805093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87987 ] 00:18:12.226 [2024-11-20 17:10:35.978430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.485 [2024-11-20 17:10:36.104273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.485 [2024-11-20 17:10:36.307695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.485 [2024-11-20 17:10:36.307779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.053 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.053 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:13.053 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:13.053 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:13.053 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.053 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.053 BaseBdev1_malloc 00:18:13.053 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.053 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:13.053 17:10:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.054 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.054 [2024-11-20 17:10:36.864941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:13.054 [2024-11-20 17:10:36.865021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.054 [2024-11-20 17:10:36.865051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:13.054 [2024-11-20 17:10:36.865077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.054 [2024-11-20 17:10:36.867711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.054 [2024-11-20 17:10:36.867769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:13.054 BaseBdev1 00:18:13.054 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.054 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:13.054 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:13.054 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.054 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.054 BaseBdev2_malloc 00:18:13.054 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.054 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:13.054 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:13.054 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.054 [2024-11-20 17:10:36.912584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:13.054 [2024-11-20 17:10:36.912671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.054 [2024-11-20 17:10:36.912698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:13.054 [2024-11-20 17:10:36.912715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.054 [2024-11-20 17:10:36.915061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.054 [2024-11-20 17:10:36.915153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:13.054 BaseBdev2 00:18:13.054 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.054 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:13.054 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.054 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.313 spare_malloc 00:18:13.313 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.313 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.314 spare_delay 00:18:13.314 17:10:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.314 [2024-11-20 17:10:36.978285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:13.314 [2024-11-20 17:10:36.978392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.314 [2024-11-20 17:10:36.978422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:13.314 [2024-11-20 17:10:36.978439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.314 [2024-11-20 17:10:36.981196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.314 [2024-11-20 17:10:36.981257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:13.314 spare 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.314 [2024-11-20 17:10:36.986339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:13.314 [2024-11-20 17:10:36.989034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:18:13.314 [2024-11-20 17:10:36.989272] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:13.314 [2024-11-20 17:10:36.989294] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:13.314 [2024-11-20 17:10:36.989393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:13.314 [2024-11-20 17:10:36.989552] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:13.314 [2024-11-20 17:10:36.989568] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:13.314 [2024-11-20 17:10:36.989685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.314 17:10:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.314 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.314 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.314 "name": "raid_bdev1", 00:18:13.314 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:13.314 "strip_size_kb": 0, 00:18:13.314 "state": "online", 00:18:13.314 "raid_level": "raid1", 00:18:13.314 "superblock": true, 00:18:13.314 "num_base_bdevs": 2, 00:18:13.314 "num_base_bdevs_discovered": 2, 00:18:13.314 "num_base_bdevs_operational": 2, 00:18:13.314 "base_bdevs_list": [ 00:18:13.314 { 00:18:13.314 "name": "BaseBdev1", 00:18:13.314 "uuid": "1b53a401-46ac-51d4-b5ce-46675fde6cc1", 00:18:13.314 "is_configured": true, 00:18:13.314 "data_offset": 256, 00:18:13.314 "data_size": 7936 00:18:13.314 }, 00:18:13.314 { 00:18:13.314 "name": "BaseBdev2", 00:18:13.314 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:13.314 "is_configured": true, 00:18:13.314 "data_offset": 256, 00:18:13.314 "data_size": 7936 00:18:13.314 } 00:18:13.314 ] 00:18:13.314 }' 00:18:13.314 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.314 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.881 17:10:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.881 [2024-11-20 17:10:37.542965] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:13.881 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:14.140 [2024-11-20 17:10:37.870674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:14.141 /dev/nbd0 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:14.141 
17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:14.141 1+0 records in 00:18:14.141 1+0 records out 00:18:14.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483198 s, 8.5 MB/s 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:14.141 17:10:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:15.075 7936+0 records in 00:18:15.075 7936+0 records out 00:18:15.075 32505856 bytes (33 MB, 31 MiB) copied, 0.937096 s, 34.7 MB/s 00:18:15.075 17:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:15.075 17:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:15.075 17:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:15.075 17:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:15.075 17:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:15.075 17:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:15.075 17:10:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:15.345 [2024-11-20 17:10:39.178336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.345 [2024-11-20 17:10:39.190493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.345 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.615 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.615 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.615 "name": "raid_bdev1", 00:18:15.615 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:15.615 "strip_size_kb": 0, 00:18:15.615 "state": "online", 00:18:15.615 "raid_level": "raid1", 00:18:15.615 "superblock": true, 00:18:15.615 "num_base_bdevs": 2, 00:18:15.615 "num_base_bdevs_discovered": 1, 00:18:15.615 "num_base_bdevs_operational": 1, 00:18:15.615 "base_bdevs_list": [ 00:18:15.615 { 00:18:15.615 "name": null, 00:18:15.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.615 "is_configured": false, 00:18:15.615 "data_offset": 0, 00:18:15.615 "data_size": 7936 00:18:15.615 }, 00:18:15.615 { 00:18:15.615 "name": "BaseBdev2", 00:18:15.615 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:15.615 "is_configured": true, 00:18:15.615 "data_offset": 256, 00:18:15.615 "data_size": 7936 00:18:15.615 } 00:18:15.615 ] 00:18:15.615 }' 00:18:15.615 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.615 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.874 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:15.874 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:15.874 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.874 [2024-11-20 17:10:39.690573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:15.874 [2024-11-20 17:10:39.704838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:15.874 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.874 17:10:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:15.874 [2024-11-20 17:10:39.707484] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.250 17:10:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.250 "name": "raid_bdev1", 00:18:17.250 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:17.250 "strip_size_kb": 0, 00:18:17.250 "state": "online", 00:18:17.250 "raid_level": "raid1", 00:18:17.250 "superblock": true, 00:18:17.250 "num_base_bdevs": 2, 00:18:17.250 "num_base_bdevs_discovered": 2, 00:18:17.250 "num_base_bdevs_operational": 2, 00:18:17.250 "process": { 00:18:17.250 "type": "rebuild", 00:18:17.250 "target": "spare", 00:18:17.250 "progress": { 00:18:17.250 "blocks": 2560, 00:18:17.250 "percent": 32 00:18:17.250 } 00:18:17.250 }, 00:18:17.250 "base_bdevs_list": [ 00:18:17.250 { 00:18:17.250 "name": "spare", 00:18:17.250 "uuid": "38593834-8fa3-59c2-94c3-95a736c1bd4a", 00:18:17.250 "is_configured": true, 00:18:17.250 "data_offset": 256, 00:18:17.250 "data_size": 7936 00:18:17.250 }, 00:18:17.250 { 00:18:17.250 "name": "BaseBdev2", 00:18:17.250 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:17.250 "is_configured": true, 00:18:17.250 "data_offset": 256, 00:18:17.250 "data_size": 7936 00:18:17.250 } 00:18:17.250 ] 00:18:17.250 }' 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:17.250 [2024-11-20 17:10:40.880989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:17.250 [2024-11-20 17:10:40.916940] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:17.250 [2024-11-20 17:10:40.917057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.250 [2024-11-20 17:10:40.917081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:17.250 [2024-11-20 17:10:40.917095] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.250 "name": "raid_bdev1", 00:18:17.250 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:17.250 "strip_size_kb": 0, 00:18:17.250 "state": "online", 00:18:17.250 "raid_level": "raid1", 00:18:17.250 "superblock": true, 00:18:17.250 "num_base_bdevs": 2, 00:18:17.250 "num_base_bdevs_discovered": 1, 00:18:17.250 "num_base_bdevs_operational": 1, 00:18:17.250 "base_bdevs_list": [ 00:18:17.250 { 00:18:17.250 "name": null, 00:18:17.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.250 "is_configured": false, 00:18:17.250 "data_offset": 0, 00:18:17.250 "data_size": 7936 00:18:17.250 }, 00:18:17.250 { 00:18:17.250 "name": "BaseBdev2", 00:18:17.250 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:17.250 "is_configured": true, 00:18:17.250 "data_offset": 256, 00:18:17.250 "data_size": 7936 00:18:17.250 } 00:18:17.250 ] 00:18:17.250 }' 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.250 17:10:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:17.818 17:10:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.818 "name": "raid_bdev1", 00:18:17.818 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:17.818 "strip_size_kb": 0, 00:18:17.818 "state": "online", 00:18:17.818 "raid_level": "raid1", 00:18:17.818 "superblock": true, 00:18:17.818 "num_base_bdevs": 2, 00:18:17.818 "num_base_bdevs_discovered": 1, 00:18:17.818 "num_base_bdevs_operational": 1, 00:18:17.818 "base_bdevs_list": [ 00:18:17.818 { 00:18:17.818 "name": null, 00:18:17.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.818 "is_configured": false, 00:18:17.818 "data_offset": 0, 00:18:17.818 "data_size": 7936 00:18:17.818 }, 00:18:17.818 { 00:18:17.818 "name": "BaseBdev2", 00:18:17.818 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:17.818 "is_configured": true, 00:18:17.818 "data_offset": 256, 00:18:17.818 "data_size": 7936 
00:18:17.818 } 00:18:17.818 ] 00:18:17.818 }' 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.818 [2024-11-20 17:10:41.637962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.818 [2024-11-20 17:10:41.649661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.818 17:10:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:17.818 [2024-11-20 17:10:41.652237] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:19.194 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.194 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.194 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.194 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:18:19.194 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.194 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.194 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.194 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.194 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.194 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.194 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.194 "name": "raid_bdev1", 00:18:19.194 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:19.194 "strip_size_kb": 0, 00:18:19.194 "state": "online", 00:18:19.194 "raid_level": "raid1", 00:18:19.194 "superblock": true, 00:18:19.194 "num_base_bdevs": 2, 00:18:19.194 "num_base_bdevs_discovered": 2, 00:18:19.194 "num_base_bdevs_operational": 2, 00:18:19.194 "process": { 00:18:19.194 "type": "rebuild", 00:18:19.194 "target": "spare", 00:18:19.194 "progress": { 00:18:19.194 "blocks": 2560, 00:18:19.194 "percent": 32 00:18:19.194 } 00:18:19.194 }, 00:18:19.194 "base_bdevs_list": [ 00:18:19.194 { 00:18:19.194 "name": "spare", 00:18:19.194 "uuid": "38593834-8fa3-59c2-94c3-95a736c1bd4a", 00:18:19.194 "is_configured": true, 00:18:19.194 "data_offset": 256, 00:18:19.194 "data_size": 7936 00:18:19.194 }, 00:18:19.194 { 00:18:19.194 "name": "BaseBdev2", 00:18:19.194 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:19.194 "is_configured": true, 00:18:19.194 "data_offset": 256, 00:18:19.194 "data_size": 7936 00:18:19.194 } 00:18:19.194 ] 00:18:19.194 }' 00:18:19.194 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.194 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.194 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.194 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.194 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:19.194 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:19.195 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=758 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.195 
17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.195 "name": "raid_bdev1", 00:18:19.195 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:19.195 "strip_size_kb": 0, 00:18:19.195 "state": "online", 00:18:19.195 "raid_level": "raid1", 00:18:19.195 "superblock": true, 00:18:19.195 "num_base_bdevs": 2, 00:18:19.195 "num_base_bdevs_discovered": 2, 00:18:19.195 "num_base_bdevs_operational": 2, 00:18:19.195 "process": { 00:18:19.195 "type": "rebuild", 00:18:19.195 "target": "spare", 00:18:19.195 "progress": { 00:18:19.195 "blocks": 2816, 00:18:19.195 "percent": 35 00:18:19.195 } 00:18:19.195 }, 00:18:19.195 "base_bdevs_list": [ 00:18:19.195 { 00:18:19.195 "name": "spare", 00:18:19.195 "uuid": "38593834-8fa3-59c2-94c3-95a736c1bd4a", 00:18:19.195 "is_configured": true, 00:18:19.195 "data_offset": 256, 00:18:19.195 "data_size": 7936 00:18:19.195 }, 00:18:19.195 { 00:18:19.195 "name": "BaseBdev2", 00:18:19.195 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:19.195 "is_configured": true, 00:18:19.195 "data_offset": 256, 00:18:19.195 "data_size": 7936 00:18:19.195 } 00:18:19.195 ] 00:18:19.195 }' 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.195 17:10:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:20.130 17:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:20.130 17:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.130 17:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.130 17:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.130 17:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.130 17:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.130 17:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.130 17:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.130 17:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.130 17:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.389 17:10:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.389 17:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.389 "name": "raid_bdev1", 00:18:20.389 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:20.389 "strip_size_kb": 0, 00:18:20.389 
"state": "online", 00:18:20.389 "raid_level": "raid1", 00:18:20.389 "superblock": true, 00:18:20.389 "num_base_bdevs": 2, 00:18:20.389 "num_base_bdevs_discovered": 2, 00:18:20.389 "num_base_bdevs_operational": 2, 00:18:20.389 "process": { 00:18:20.389 "type": "rebuild", 00:18:20.389 "target": "spare", 00:18:20.389 "progress": { 00:18:20.389 "blocks": 5888, 00:18:20.389 "percent": 74 00:18:20.389 } 00:18:20.389 }, 00:18:20.389 "base_bdevs_list": [ 00:18:20.389 { 00:18:20.389 "name": "spare", 00:18:20.389 "uuid": "38593834-8fa3-59c2-94c3-95a736c1bd4a", 00:18:20.389 "is_configured": true, 00:18:20.389 "data_offset": 256, 00:18:20.389 "data_size": 7936 00:18:20.389 }, 00:18:20.389 { 00:18:20.389 "name": "BaseBdev2", 00:18:20.389 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:20.389 "is_configured": true, 00:18:20.389 "data_offset": 256, 00:18:20.389 "data_size": 7936 00:18:20.389 } 00:18:20.389 ] 00:18:20.389 }' 00:18:20.389 17:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.389 17:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.389 17:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.389 17:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.389 17:10:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:20.956 [2024-11-20 17:10:44.774277] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:20.956 [2024-11-20 17:10:44.774361] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:20.956 [2024-11-20 17:10:44.774548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.525 "name": "raid_bdev1", 00:18:21.525 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:21.525 "strip_size_kb": 0, 00:18:21.525 "state": "online", 00:18:21.525 "raid_level": "raid1", 00:18:21.525 "superblock": true, 00:18:21.525 "num_base_bdevs": 2, 00:18:21.525 "num_base_bdevs_discovered": 2, 00:18:21.525 "num_base_bdevs_operational": 2, 00:18:21.525 "base_bdevs_list": [ 00:18:21.525 { 00:18:21.525 "name": "spare", 00:18:21.525 "uuid": "38593834-8fa3-59c2-94c3-95a736c1bd4a", 00:18:21.525 "is_configured": true, 00:18:21.525 "data_offset": 256, 00:18:21.525 "data_size": 7936 
00:18:21.525 }, 00:18:21.525 { 00:18:21.525 "name": "BaseBdev2", 00:18:21.525 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:21.525 "is_configured": true, 00:18:21.525 "data_offset": 256, 00:18:21.525 "data_size": 7936 00:18:21.525 } 00:18:21.525 ] 00:18:21.525 }' 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.525 
17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.525 "name": "raid_bdev1", 00:18:21.525 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:21.525 "strip_size_kb": 0, 00:18:21.525 "state": "online", 00:18:21.525 "raid_level": "raid1", 00:18:21.525 "superblock": true, 00:18:21.525 "num_base_bdevs": 2, 00:18:21.525 "num_base_bdevs_discovered": 2, 00:18:21.525 "num_base_bdevs_operational": 2, 00:18:21.525 "base_bdevs_list": [ 00:18:21.525 { 00:18:21.525 "name": "spare", 00:18:21.525 "uuid": "38593834-8fa3-59c2-94c3-95a736c1bd4a", 00:18:21.525 "is_configured": true, 00:18:21.525 "data_offset": 256, 00:18:21.525 "data_size": 7936 00:18:21.525 }, 00:18:21.525 { 00:18:21.525 "name": "BaseBdev2", 00:18:21.525 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:21.525 "is_configured": true, 00:18:21.525 "data_offset": 256, 00:18:21.525 "data_size": 7936 00:18:21.525 } 00:18:21.525 ] 00:18:21.525 }' 00:18:21.525 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.784 17:10:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.784 "name": "raid_bdev1", 00:18:21.784 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:21.784 "strip_size_kb": 0, 00:18:21.784 "state": "online", 00:18:21.784 "raid_level": "raid1", 00:18:21.784 "superblock": true, 00:18:21.784 "num_base_bdevs": 2, 00:18:21.784 "num_base_bdevs_discovered": 2, 00:18:21.784 "num_base_bdevs_operational": 2, 00:18:21.784 "base_bdevs_list": [ 00:18:21.784 { 00:18:21.784 "name": "spare", 00:18:21.784 "uuid": 
"38593834-8fa3-59c2-94c3-95a736c1bd4a", 00:18:21.784 "is_configured": true, 00:18:21.784 "data_offset": 256, 00:18:21.784 "data_size": 7936 00:18:21.784 }, 00:18:21.784 { 00:18:21.784 "name": "BaseBdev2", 00:18:21.784 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:21.784 "is_configured": true, 00:18:21.784 "data_offset": 256, 00:18:21.784 "data_size": 7936 00:18:21.784 } 00:18:21.784 ] 00:18:21.784 }' 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.784 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.352 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:22.352 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.352 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.352 [2024-11-20 17:10:45.975569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:22.352 [2024-11-20 17:10:45.975655] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:22.352 [2024-11-20 17:10:45.975778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.352 [2024-11-20 17:10:45.975873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.352 [2024-11-20 17:10:45.975890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:22.352 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.352 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.352 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.352 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.352 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:22.352 17:10:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.352 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:22.352 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:22.352 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:22.352 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:22.352 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:22.352 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:22.352 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:22.352 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:22.352 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:22.352 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:22.352 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:22.352 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:22.352 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 
/dev/nbd0 00:18:22.611 /dev/nbd0 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:22.611 1+0 records in 00:18:22.611 1+0 records out 00:18:22.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024788 s, 16.5 MB/s 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.611 17:10:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:22.611 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:22.870 /dev/nbd1 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:18:22.870 1+0 records in 00:18:22.870 1+0 records out 00:18:22.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043998 s, 9.3 MB/s 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:22.870 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:23.128 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:23.128 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:23.128 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:23.128 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:23.128 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:23.128 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:23.128 17:10:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:23.387 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:23.387 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:23.387 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:23.387 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:23.387 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:23.387 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:23.387 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:23.387 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:23.387 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:23.387 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:23.646 
17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.646 [2024-11-20 17:10:47.391861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:23.646 [2024-11-20 17:10:47.391930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.646 [2024-11-20 17:10:47.391964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:23.646 [2024-11-20 17:10:47.391980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.646 [2024-11-20 17:10:47.394789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.646 [2024-11-20 17:10:47.394858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:23.646 [2024-11-20 17:10:47.394961] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:18:23.646 [2024-11-20 17:10:47.395040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:23.646 [2024-11-20 17:10:47.395257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:23.646 spare 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.646 [2024-11-20 17:10:47.495365] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:23.646 [2024-11-20 17:10:47.495415] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:23.646 [2024-11-20 17:10:47.495531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:23.646 [2024-11-20 17:10:47.495785] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:23.646 [2024-11-20 17:10:47.495829] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:23.646 [2024-11-20 17:10:47.496012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.646 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.905 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.905 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.905 "name": "raid_bdev1", 00:18:23.905 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:23.905 "strip_size_kb": 0, 00:18:23.905 "state": "online", 00:18:23.905 "raid_level": "raid1", 00:18:23.905 "superblock": true, 00:18:23.905 "num_base_bdevs": 2, 00:18:23.905 "num_base_bdevs_discovered": 2, 00:18:23.905 "num_base_bdevs_operational": 2, 00:18:23.905 "base_bdevs_list": [ 
00:18:23.905 { 00:18:23.905 "name": "spare", 00:18:23.905 "uuid": "38593834-8fa3-59c2-94c3-95a736c1bd4a", 00:18:23.905 "is_configured": true, 00:18:23.905 "data_offset": 256, 00:18:23.905 "data_size": 7936 00:18:23.905 }, 00:18:23.905 { 00:18:23.905 "name": "BaseBdev2", 00:18:23.905 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:23.905 "is_configured": true, 00:18:23.905 "data_offset": 256, 00:18:23.905 "data_size": 7936 00:18:23.905 } 00:18:23.905 ] 00:18:23.905 }' 00:18:23.905 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.905 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.164 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:24.164 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.164 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:24.164 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:24.164 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.164 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.164 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.164 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.164 17:10:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.164 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.424 "name": "raid_bdev1", 00:18:24.424 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:24.424 "strip_size_kb": 0, 00:18:24.424 "state": "online", 00:18:24.424 "raid_level": "raid1", 00:18:24.424 "superblock": true, 00:18:24.424 "num_base_bdevs": 2, 00:18:24.424 "num_base_bdevs_discovered": 2, 00:18:24.424 "num_base_bdevs_operational": 2, 00:18:24.424 "base_bdevs_list": [ 00:18:24.424 { 00:18:24.424 "name": "spare", 00:18:24.424 "uuid": "38593834-8fa3-59c2-94c3-95a736c1bd4a", 00:18:24.424 "is_configured": true, 00:18:24.424 "data_offset": 256, 00:18:24.424 "data_size": 7936 00:18:24.424 }, 00:18:24.424 { 00:18:24.424 "name": "BaseBdev2", 00:18:24.424 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:24.424 "is_configured": true, 00:18:24.424 "data_offset": 256, 00:18:24.424 "data_size": 7936 00:18:24.424 } 00:18:24.424 ] 00:18:24.424 }' 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.424 [2024-11-20 17:10:48.180347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.424 "name": "raid_bdev1", 00:18:24.424 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:24.424 "strip_size_kb": 0, 00:18:24.424 "state": "online", 00:18:24.424 "raid_level": "raid1", 00:18:24.424 "superblock": true, 00:18:24.424 "num_base_bdevs": 2, 00:18:24.424 "num_base_bdevs_discovered": 1, 00:18:24.424 "num_base_bdevs_operational": 1, 00:18:24.424 "base_bdevs_list": [ 00:18:24.424 { 00:18:24.424 "name": null, 00:18:24.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.424 "is_configured": false, 00:18:24.424 "data_offset": 0, 00:18:24.424 "data_size": 7936 00:18:24.424 }, 00:18:24.424 { 00:18:24.424 "name": "BaseBdev2", 00:18:24.424 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:24.424 "is_configured": true, 00:18:24.424 "data_offset": 256, 00:18:24.424 "data_size": 7936 00:18:24.424 } 00:18:24.424 ] 00:18:24.424 }' 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.424 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.991 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:24.991 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:24.991 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.991 [2024-11-20 17:10:48.664609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:24.991 [2024-11-20 17:10:48.664900] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:24.991 [2024-11-20 17:10:48.664928] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:24.991 [2024-11-20 17:10:48.665006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:24.991 [2024-11-20 17:10:48.678532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:24.991 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.991 17:10:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:24.991 [2024-11-20 17:10:48.681241] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:26.009 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.009 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.009 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.009 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.009 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.009 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.010 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.010 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.010 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.010 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.010 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.010 "name": "raid_bdev1", 00:18:26.010 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:26.010 "strip_size_kb": 0, 00:18:26.010 "state": "online", 00:18:26.010 "raid_level": "raid1", 00:18:26.010 "superblock": true, 00:18:26.010 "num_base_bdevs": 2, 00:18:26.010 "num_base_bdevs_discovered": 2, 00:18:26.010 "num_base_bdevs_operational": 2, 00:18:26.010 "process": { 00:18:26.010 "type": "rebuild", 00:18:26.010 "target": "spare", 00:18:26.010 "progress": { 00:18:26.010 "blocks": 2560, 00:18:26.010 "percent": 32 00:18:26.010 } 00:18:26.010 }, 00:18:26.010 "base_bdevs_list": [ 00:18:26.010 { 00:18:26.010 "name": "spare", 00:18:26.010 "uuid": "38593834-8fa3-59c2-94c3-95a736c1bd4a", 00:18:26.010 "is_configured": true, 00:18:26.010 "data_offset": 256, 00:18:26.010 "data_size": 7936 00:18:26.010 }, 00:18:26.010 { 00:18:26.010 "name": "BaseBdev2", 00:18:26.010 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:26.010 "is_configured": true, 00:18:26.010 "data_offset": 256, 00:18:26.010 "data_size": 7936 00:18:26.010 } 00:18:26.010 ] 00:18:26.010 }' 00:18:26.010 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.010 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:26.010 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.010 17:10:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.010 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:26.010 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.010 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.010 [2024-11-20 17:10:49.854949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.269 [2024-11-20 17:10:49.890341] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:26.269 [2024-11-20 17:10:49.890432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.269 [2024-11-20 17:10:49.890461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.269 [2024-11-20 17:10:49.890531] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:26.269 17:10:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.269 "name": "raid_bdev1", 00:18:26.269 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:26.269 "strip_size_kb": 0, 00:18:26.269 "state": "online", 00:18:26.269 "raid_level": "raid1", 00:18:26.269 "superblock": true, 00:18:26.269 "num_base_bdevs": 2, 00:18:26.269 "num_base_bdevs_discovered": 1, 00:18:26.269 "num_base_bdevs_operational": 1, 00:18:26.269 "base_bdevs_list": [ 00:18:26.269 { 00:18:26.269 "name": null, 00:18:26.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.269 "is_configured": false, 00:18:26.269 "data_offset": 0, 00:18:26.269 "data_size": 7936 00:18:26.269 }, 00:18:26.269 { 00:18:26.269 "name": "BaseBdev2", 00:18:26.269 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:26.269 "is_configured": true, 00:18:26.269 "data_offset": 256, 00:18:26.269 "data_size": 7936 00:18:26.269 } 
00:18:26.269 ] 00:18:26.269 }' 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.269 17:10:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.838 17:10:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:26.838 17:10:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.838 17:10:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.838 [2024-11-20 17:10:50.429182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:26.838 [2024-11-20 17:10:50.429287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.838 [2024-11-20 17:10:50.429320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:26.838 [2024-11-20 17:10:50.429339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.838 [2024-11-20 17:10:50.429706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.838 [2024-11-20 17:10:50.429749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:26.838 [2024-11-20 17:10:50.429845] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:26.838 [2024-11-20 17:10:50.429869] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:26.838 [2024-11-20 17:10:50.429883] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:26.838 [2024-11-20 17:10:50.429914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:26.838 [2024-11-20 17:10:50.442381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:26.838 spare 00:18:26.838 17:10:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.838 17:10:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:26.838 [2024-11-20 17:10:50.445024] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:27.772 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.772 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.772 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.772 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:27.773 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.773 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.773 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.773 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.773 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.773 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.773 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.773 "name": 
"raid_bdev1", 00:18:27.773 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:27.773 "strip_size_kb": 0, 00:18:27.773 "state": "online", 00:18:27.773 "raid_level": "raid1", 00:18:27.773 "superblock": true, 00:18:27.773 "num_base_bdevs": 2, 00:18:27.773 "num_base_bdevs_discovered": 2, 00:18:27.773 "num_base_bdevs_operational": 2, 00:18:27.773 "process": { 00:18:27.773 "type": "rebuild", 00:18:27.773 "target": "spare", 00:18:27.773 "progress": { 00:18:27.773 "blocks": 2560, 00:18:27.773 "percent": 32 00:18:27.773 } 00:18:27.773 }, 00:18:27.773 "base_bdevs_list": [ 00:18:27.773 { 00:18:27.773 "name": "spare", 00:18:27.773 "uuid": "38593834-8fa3-59c2-94c3-95a736c1bd4a", 00:18:27.773 "is_configured": true, 00:18:27.773 "data_offset": 256, 00:18:27.773 "data_size": 7936 00:18:27.773 }, 00:18:27.773 { 00:18:27.773 "name": "BaseBdev2", 00:18:27.773 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:27.773 "is_configured": true, 00:18:27.773 "data_offset": 256, 00:18:27.773 "data_size": 7936 00:18:27.773 } 00:18:27.773 ] 00:18:27.773 }' 00:18:27.773 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.773 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:27.773 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.773 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.773 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:27.773 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.773 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.773 [2024-11-20 17:10:51.607291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:28.031 [2024-11-20 17:10:51.653735] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:28.031 [2024-11-20 17:10:51.653840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.031 [2024-11-20 17:10:51.653866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:28.031 [2024-11-20 17:10:51.653877] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:28.031 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.031 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:28.031 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.031 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.031 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.031 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.032 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:28.032 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.032 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.032 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.032 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.032 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:28.032 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.032 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.032 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.032 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.032 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.032 "name": "raid_bdev1", 00:18:28.032 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:28.032 "strip_size_kb": 0, 00:18:28.032 "state": "online", 00:18:28.032 "raid_level": "raid1", 00:18:28.032 "superblock": true, 00:18:28.032 "num_base_bdevs": 2, 00:18:28.032 "num_base_bdevs_discovered": 1, 00:18:28.032 "num_base_bdevs_operational": 1, 00:18:28.032 "base_bdevs_list": [ 00:18:28.032 { 00:18:28.032 "name": null, 00:18:28.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.032 "is_configured": false, 00:18:28.032 "data_offset": 0, 00:18:28.032 "data_size": 7936 00:18:28.032 }, 00:18:28.032 { 00:18:28.032 "name": "BaseBdev2", 00:18:28.032 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:28.032 "is_configured": true, 00:18:28.032 "data_offset": 256, 00:18:28.032 "data_size": 7936 00:18:28.032 } 00:18:28.032 ] 00:18:28.032 }' 00:18:28.032 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.032 17:10:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.600 17:10:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.600 "name": "raid_bdev1", 00:18:28.600 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:28.600 "strip_size_kb": 0, 00:18:28.600 "state": "online", 00:18:28.600 "raid_level": "raid1", 00:18:28.600 "superblock": true, 00:18:28.600 "num_base_bdevs": 2, 00:18:28.600 "num_base_bdevs_discovered": 1, 00:18:28.600 "num_base_bdevs_operational": 1, 00:18:28.600 "base_bdevs_list": [ 00:18:28.600 { 00:18:28.600 "name": null, 00:18:28.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.600 "is_configured": false, 00:18:28.600 "data_offset": 0, 00:18:28.600 "data_size": 7936 00:18:28.600 }, 00:18:28.600 { 00:18:28.600 "name": "BaseBdev2", 00:18:28.600 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:28.600 "is_configured": true, 00:18:28.600 "data_offset": 256, 00:18:28.600 "data_size": 7936 00:18:28.600 } 00:18:28.600 ] 00:18:28.600 }' 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.600 [2024-11-20 17:10:52.334798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:28.600 [2024-11-20 17:10:52.334891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.600 [2024-11-20 17:10:52.334924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:28.600 [2024-11-20 17:10:52.334939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.600 [2024-11-20 17:10:52.335244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.600 [2024-11-20 17:10:52.335267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:18:28.600 [2024-11-20 17:10:52.335349] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:28.600 [2024-11-20 17:10:52.335370] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:28.600 [2024-11-20 17:10:52.335384] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:28.600 [2024-11-20 17:10:52.335396] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:28.600 BaseBdev1 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.600 17:10:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:29.537 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.537 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.537 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.537 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.537 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.537 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:29.537 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.537 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.537 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:29.537 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.537 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.538 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.538 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.538 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.538 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.796 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.796 "name": "raid_bdev1", 00:18:29.796 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:29.796 "strip_size_kb": 0, 00:18:29.796 "state": "online", 00:18:29.796 "raid_level": "raid1", 00:18:29.796 "superblock": true, 00:18:29.796 "num_base_bdevs": 2, 00:18:29.796 "num_base_bdevs_discovered": 1, 00:18:29.796 "num_base_bdevs_operational": 1, 00:18:29.796 "base_bdevs_list": [ 00:18:29.796 { 00:18:29.796 "name": null, 00:18:29.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.796 "is_configured": false, 00:18:29.796 "data_offset": 0, 00:18:29.796 "data_size": 7936 00:18:29.796 }, 00:18:29.796 { 00:18:29.796 "name": "BaseBdev2", 00:18:29.796 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:29.796 "is_configured": true, 00:18:29.796 "data_offset": 256, 00:18:29.796 "data_size": 7936 00:18:29.796 } 00:18:29.796 ] 00:18:29.796 }' 00:18:29.796 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.796 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.055 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:18:30.055 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.055 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:30.055 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:30.055 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.055 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.055 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.055 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.055 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.055 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.055 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.055 "name": "raid_bdev1", 00:18:30.055 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:30.055 "strip_size_kb": 0, 00:18:30.055 "state": "online", 00:18:30.055 "raid_level": "raid1", 00:18:30.055 "superblock": true, 00:18:30.055 "num_base_bdevs": 2, 00:18:30.055 "num_base_bdevs_discovered": 1, 00:18:30.055 "num_base_bdevs_operational": 1, 00:18:30.055 "base_bdevs_list": [ 00:18:30.055 { 00:18:30.055 "name": null, 00:18:30.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.055 "is_configured": false, 00:18:30.055 "data_offset": 0, 00:18:30.055 "data_size": 7936 00:18:30.055 }, 00:18:30.055 { 00:18:30.055 "name": "BaseBdev2", 00:18:30.055 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:30.055 "is_configured": 
true, 00:18:30.055 "data_offset": 256, 00:18:30.055 "data_size": 7936 00:18:30.055 } 00:18:30.055 ] 00:18:30.055 }' 00:18:30.055 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.314 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:30.314 17:10:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.314 17:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:30.314 17:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:30.314 17:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:30.314 17:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:30.314 17:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:30.314 17:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.314 17:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:30.314 17:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.314 17:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:30.314 17:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.314 17:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.314 [2024-11-20 17:10:54.015511] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:30.314 [2024-11-20 17:10:54.015787] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:30.314 [2024-11-20 17:10:54.015812] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:30.314 request: 00:18:30.314 { 00:18:30.314 "base_bdev": "BaseBdev1", 00:18:30.314 "raid_bdev": "raid_bdev1", 00:18:30.314 "method": "bdev_raid_add_base_bdev", 00:18:30.314 "req_id": 1 00:18:30.314 } 00:18:30.314 Got JSON-RPC error response 00:18:30.314 response: 00:18:30.314 { 00:18:30.314 "code": -22, 00:18:30.314 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:30.314 } 00:18:30.314 17:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:30.314 17:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:30.314 17:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.314 17:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.314 17:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.314 17:10:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.251 "name": "raid_bdev1", 00:18:31.251 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:31.251 "strip_size_kb": 0, 00:18:31.251 "state": "online", 00:18:31.251 "raid_level": "raid1", 00:18:31.251 "superblock": true, 00:18:31.251 "num_base_bdevs": 2, 00:18:31.251 "num_base_bdevs_discovered": 1, 00:18:31.251 "num_base_bdevs_operational": 1, 00:18:31.251 "base_bdevs_list": [ 00:18:31.251 { 00:18:31.251 "name": null, 00:18:31.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.251 "is_configured": false, 00:18:31.251 
"data_offset": 0, 00:18:31.251 "data_size": 7936 00:18:31.251 }, 00:18:31.251 { 00:18:31.251 "name": "BaseBdev2", 00:18:31.251 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:31.251 "is_configured": true, 00:18:31.251 "data_offset": 256, 00:18:31.251 "data_size": 7936 00:18:31.251 } 00:18:31.251 ] 00:18:31.251 }' 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.251 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.819 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:31.819 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.819 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:31.819 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:31.819 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.819 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.819 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.819 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.819 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.819 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.819 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.819 "name": "raid_bdev1", 00:18:31.819 "uuid": "6a91ceda-d768-40f9-bc0c-a10cea585139", 00:18:31.819 
"strip_size_kb": 0, 00:18:31.819 "state": "online", 00:18:31.819 "raid_level": "raid1", 00:18:31.819 "superblock": true, 00:18:31.819 "num_base_bdevs": 2, 00:18:31.819 "num_base_bdevs_discovered": 1, 00:18:31.819 "num_base_bdevs_operational": 1, 00:18:31.819 "base_bdevs_list": [ 00:18:31.819 { 00:18:31.819 "name": null, 00:18:31.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.819 "is_configured": false, 00:18:31.819 "data_offset": 0, 00:18:31.819 "data_size": 7936 00:18:31.819 }, 00:18:31.819 { 00:18:31.819 "name": "BaseBdev2", 00:18:31.819 "uuid": "236c5679-12a8-5db5-beac-a08a55df12ed", 00:18:31.819 "is_configured": true, 00:18:31.819 "data_offset": 256, 00:18:31.819 "data_size": 7936 00:18:31.819 } 00:18:31.819 ] 00:18:31.819 }' 00:18:31.819 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.819 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:31.819 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.079 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:32.079 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87987 00:18:32.079 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87987 ']' 00:18:32.079 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87987 00:18:32.079 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:32.079 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.079 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87987 00:18:32.079 17:10:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:32.079 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:32.079 killing process with pid 87987 00:18:32.079 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87987' 00:18:32.079 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87987 00:18:32.079 Received shutdown signal, test time was about 60.000000 seconds 00:18:32.079 00:18:32.079 Latency(us) 00:18:32.079 [2024-11-20T17:10:55.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.079 [2024-11-20T17:10:55.948Z] =================================================================================================================== 00:18:32.079 [2024-11-20T17:10:55.948Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:32.079 [2024-11-20 17:10:55.733028] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:32.079 17:10:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87987 00:18:32.079 [2024-11-20 17:10:55.733183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:32.079 [2024-11-20 17:10:55.733246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:32.079 [2024-11-20 17:10:55.733272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:32.338 [2024-11-20 17:10:56.018724] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:33.276 17:10:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:33.276 00:18:33.276 real 0m21.316s 00:18:33.276 user 0m28.784s 00:18:33.276 sys 0m2.622s 00:18:33.276 17:10:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.276 17:10:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.276 ************************************ 00:18:33.276 END TEST raid_rebuild_test_sb_md_separate 00:18:33.276 ************************************ 00:18:33.276 17:10:57 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:33.276 17:10:57 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:33.276 17:10:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:33.276 17:10:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.276 17:10:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.276 ************************************ 00:18:33.276 START TEST raid_state_function_test_sb_md_interleaved 00:18:33.276 ************************************ 00:18:33.276 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:33.276 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:33.276 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:33.276 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:33.276 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:33.276 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:33.276 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:33.276 17:10:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:33.276 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:33.276 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88689 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:33.277 Process raid pid: 88689 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88689' 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88689 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88689 ']' 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.277 17:10:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.535 [2024-11-20 17:10:57.211580] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:18:33.535 [2024-11-20 17:10:57.211816] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.535 [2024-11-20 17:10:57.402689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.792 [2024-11-20 17:10:57.537938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.070 [2024-11-20 17:10:57.760682] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.071 [2024-11-20 17:10:57.760754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.643 [2024-11-20 17:10:58.219487] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:34.643 [2024-11-20 17:10:58.219552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:34.643 [2024-11-20 17:10:58.219569] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:34.643 [2024-11-20 17:10:58.219585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:34.643 17:10:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.643 17:10:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.643 "name": "Existed_Raid", 00:18:34.643 "uuid": "4a3e8389-42ad-4041-895e-f61ff56dbee1", 00:18:34.643 "strip_size_kb": 0, 00:18:34.643 "state": "configuring", 00:18:34.643 "raid_level": "raid1", 00:18:34.643 "superblock": true, 00:18:34.643 "num_base_bdevs": 2, 00:18:34.643 "num_base_bdevs_discovered": 0, 00:18:34.643 "num_base_bdevs_operational": 2, 00:18:34.643 "base_bdevs_list": [ 00:18:34.643 { 00:18:34.643 "name": "BaseBdev1", 00:18:34.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.643 "is_configured": false, 00:18:34.643 "data_offset": 0, 00:18:34.643 "data_size": 0 00:18:34.643 }, 00:18:34.643 { 00:18:34.643 "name": "BaseBdev2", 00:18:34.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.643 "is_configured": false, 00:18:34.643 "data_offset": 0, 00:18:34.643 "data_size": 0 00:18:34.643 } 00:18:34.643 ] 00:18:34.643 }' 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.643 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.210 [2024-11-20 17:10:58.779610] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:35.210 [2024-11-20 17:10:58.779697] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.210 [2024-11-20 17:10:58.787588] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:35.210 [2024-11-20 17:10:58.787686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:35.210 [2024-11-20 17:10:58.787702] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:35.210 [2024-11-20 17:10:58.787721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.210 [2024-11-20 17:10:58.831978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.210 BaseBdev1 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.210 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.210 [ 00:18:35.210 { 00:18:35.210 "name": "BaseBdev1", 00:18:35.210 "aliases": [ 00:18:35.210 "daebf8d5-eb3f-4ebe-b891-d72ac33a191a" 00:18:35.210 ], 00:18:35.210 "product_name": "Malloc disk", 00:18:35.210 "block_size": 4128, 00:18:35.210 "num_blocks": 8192, 00:18:35.210 "uuid": "daebf8d5-eb3f-4ebe-b891-d72ac33a191a", 00:18:35.210 "md_size": 32, 00:18:35.210 
"md_interleave": true, 00:18:35.210 "dif_type": 0, 00:18:35.210 "assigned_rate_limits": { 00:18:35.210 "rw_ios_per_sec": 0, 00:18:35.210 "rw_mbytes_per_sec": 0, 00:18:35.210 "r_mbytes_per_sec": 0, 00:18:35.210 "w_mbytes_per_sec": 0 00:18:35.210 }, 00:18:35.210 "claimed": true, 00:18:35.210 "claim_type": "exclusive_write", 00:18:35.210 "zoned": false, 00:18:35.210 "supported_io_types": { 00:18:35.211 "read": true, 00:18:35.211 "write": true, 00:18:35.211 "unmap": true, 00:18:35.211 "flush": true, 00:18:35.211 "reset": true, 00:18:35.211 "nvme_admin": false, 00:18:35.211 "nvme_io": false, 00:18:35.211 "nvme_io_md": false, 00:18:35.211 "write_zeroes": true, 00:18:35.211 "zcopy": true, 00:18:35.211 "get_zone_info": false, 00:18:35.211 "zone_management": false, 00:18:35.211 "zone_append": false, 00:18:35.211 "compare": false, 00:18:35.211 "compare_and_write": false, 00:18:35.211 "abort": true, 00:18:35.211 "seek_hole": false, 00:18:35.211 "seek_data": false, 00:18:35.211 "copy": true, 00:18:35.211 "nvme_iov_md": false 00:18:35.211 }, 00:18:35.211 "memory_domains": [ 00:18:35.211 { 00:18:35.211 "dma_device_id": "system", 00:18:35.211 "dma_device_type": 1 00:18:35.211 }, 00:18:35.211 { 00:18:35.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.211 "dma_device_type": 2 00:18:35.211 } 00:18:35.211 ], 00:18:35.211 "driver_specific": {} 00:18:35.211 } 00:18:35.211 ] 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.211 17:10:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.211 "name": "Existed_Raid", 00:18:35.211 "uuid": "0bb4a047-10e0-4096-b98d-a30a33448703", 00:18:35.211 "strip_size_kb": 0, 00:18:35.211 "state": "configuring", 00:18:35.211 "raid_level": "raid1", 
00:18:35.211 "superblock": true, 00:18:35.211 "num_base_bdevs": 2, 00:18:35.211 "num_base_bdevs_discovered": 1, 00:18:35.211 "num_base_bdevs_operational": 2, 00:18:35.211 "base_bdevs_list": [ 00:18:35.211 { 00:18:35.211 "name": "BaseBdev1", 00:18:35.211 "uuid": "daebf8d5-eb3f-4ebe-b891-d72ac33a191a", 00:18:35.211 "is_configured": true, 00:18:35.211 "data_offset": 256, 00:18:35.211 "data_size": 7936 00:18:35.211 }, 00:18:35.211 { 00:18:35.211 "name": "BaseBdev2", 00:18:35.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.211 "is_configured": false, 00:18:35.211 "data_offset": 0, 00:18:35.211 "data_size": 0 00:18:35.211 } 00:18:35.211 ] 00:18:35.211 }' 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.211 17:10:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.779 [2024-11-20 17:10:59.420280] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:35.779 [2024-11-20 17:10:59.420357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.779 [2024-11-20 17:10:59.428332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.779 [2024-11-20 17:10:59.430925] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:35.779 [2024-11-20 17:10:59.430983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.779 
17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.779 "name": "Existed_Raid", 00:18:35.779 "uuid": "9fe0138b-730e-4725-9a7e-1dea1cec1542", 00:18:35.779 "strip_size_kb": 0, 00:18:35.779 "state": "configuring", 00:18:35.779 "raid_level": "raid1", 00:18:35.779 "superblock": true, 00:18:35.779 "num_base_bdevs": 2, 00:18:35.779 "num_base_bdevs_discovered": 1, 00:18:35.779 "num_base_bdevs_operational": 2, 00:18:35.779 "base_bdevs_list": [ 00:18:35.779 { 00:18:35.779 "name": "BaseBdev1", 00:18:35.779 "uuid": "daebf8d5-eb3f-4ebe-b891-d72ac33a191a", 00:18:35.779 "is_configured": true, 00:18:35.779 "data_offset": 256, 00:18:35.779 "data_size": 7936 00:18:35.779 }, 00:18:35.779 { 00:18:35.779 "name": "BaseBdev2", 00:18:35.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.779 "is_configured": false, 00:18:35.779 "data_offset": 0, 00:18:35.779 "data_size": 0 00:18:35.779 } 00:18:35.779 ] 00:18:35.779 }' 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:35.779 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.347 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:36.347 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.347 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.347 [2024-11-20 17:10:59.980591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:36.347 [2024-11-20 17:10:59.980856] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:36.347 [2024-11-20 17:10:59.980875] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:36.347 [2024-11-20 17:10:59.981012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:36.347 [2024-11-20 17:10:59.981125] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:36.347 [2024-11-20 17:10:59.981155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:36.347 BaseBdev2 00:18:36.347 [2024-11-20 17:10:59.981248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.347 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.347 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:36.347 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:36.347 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:18:36.347 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:36.347 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:36.347 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:36.347 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:36.347 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.347 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.347 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.347 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:36.347 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.347 17:10:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.347 [ 00:18:36.347 { 00:18:36.347 "name": "BaseBdev2", 00:18:36.347 "aliases": [ 00:18:36.347 "e92f478c-7635-4be4-9c3e-3256674bf39e" 00:18:36.347 ], 00:18:36.347 "product_name": "Malloc disk", 00:18:36.347 "block_size": 4128, 00:18:36.347 "num_blocks": 8192, 00:18:36.347 "uuid": "e92f478c-7635-4be4-9c3e-3256674bf39e", 00:18:36.347 "md_size": 32, 00:18:36.348 "md_interleave": true, 00:18:36.348 "dif_type": 0, 00:18:36.348 "assigned_rate_limits": { 00:18:36.348 "rw_ios_per_sec": 0, 00:18:36.348 "rw_mbytes_per_sec": 0, 00:18:36.348 "r_mbytes_per_sec": 0, 00:18:36.348 "w_mbytes_per_sec": 0 00:18:36.348 }, 00:18:36.348 "claimed": true, 00:18:36.348 "claim_type": "exclusive_write", 
00:18:36.348 "zoned": false, 00:18:36.348 "supported_io_types": { 00:18:36.348 "read": true, 00:18:36.348 "write": true, 00:18:36.348 "unmap": true, 00:18:36.348 "flush": true, 00:18:36.348 "reset": true, 00:18:36.348 "nvme_admin": false, 00:18:36.348 "nvme_io": false, 00:18:36.348 "nvme_io_md": false, 00:18:36.348 "write_zeroes": true, 00:18:36.348 "zcopy": true, 00:18:36.348 "get_zone_info": false, 00:18:36.348 "zone_management": false, 00:18:36.348 "zone_append": false, 00:18:36.348 "compare": false, 00:18:36.348 "compare_and_write": false, 00:18:36.348 "abort": true, 00:18:36.348 "seek_hole": false, 00:18:36.348 "seek_data": false, 00:18:36.348 "copy": true, 00:18:36.348 "nvme_iov_md": false 00:18:36.348 }, 00:18:36.348 "memory_domains": [ 00:18:36.348 { 00:18:36.348 "dma_device_id": "system", 00:18:36.348 "dma_device_type": 1 00:18:36.348 }, 00:18:36.348 { 00:18:36.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.348 "dma_device_type": 2 00:18:36.348 } 00:18:36.348 ], 00:18:36.348 "driver_specific": {} 00:18:36.348 } 00:18:36.348 ] 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.348 
17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.348 "name": "Existed_Raid", 00:18:36.348 "uuid": "9fe0138b-730e-4725-9a7e-1dea1cec1542", 00:18:36.348 "strip_size_kb": 0, 00:18:36.348 "state": "online", 00:18:36.348 "raid_level": "raid1", 00:18:36.348 "superblock": true, 00:18:36.348 "num_base_bdevs": 2, 00:18:36.348 "num_base_bdevs_discovered": 2, 00:18:36.348 
"num_base_bdevs_operational": 2, 00:18:36.348 "base_bdevs_list": [ 00:18:36.348 { 00:18:36.348 "name": "BaseBdev1", 00:18:36.348 "uuid": "daebf8d5-eb3f-4ebe-b891-d72ac33a191a", 00:18:36.348 "is_configured": true, 00:18:36.348 "data_offset": 256, 00:18:36.348 "data_size": 7936 00:18:36.348 }, 00:18:36.348 { 00:18:36.348 "name": "BaseBdev2", 00:18:36.348 "uuid": "e92f478c-7635-4be4-9c3e-3256674bf39e", 00:18:36.348 "is_configured": true, 00:18:36.348 "data_offset": 256, 00:18:36.348 "data_size": 7936 00:18:36.348 } 00:18:36.348 ] 00:18:36.348 }' 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.348 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.915 17:11:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:36.915 [2024-11-20 17:11:00.549233] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:36.915 "name": "Existed_Raid", 00:18:36.915 "aliases": [ 00:18:36.915 "9fe0138b-730e-4725-9a7e-1dea1cec1542" 00:18:36.915 ], 00:18:36.915 "product_name": "Raid Volume", 00:18:36.915 "block_size": 4128, 00:18:36.915 "num_blocks": 7936, 00:18:36.915 "uuid": "9fe0138b-730e-4725-9a7e-1dea1cec1542", 00:18:36.915 "md_size": 32, 00:18:36.915 "md_interleave": true, 00:18:36.915 "dif_type": 0, 00:18:36.915 "assigned_rate_limits": { 00:18:36.915 "rw_ios_per_sec": 0, 00:18:36.915 "rw_mbytes_per_sec": 0, 00:18:36.915 "r_mbytes_per_sec": 0, 00:18:36.915 "w_mbytes_per_sec": 0 00:18:36.915 }, 00:18:36.915 "claimed": false, 00:18:36.915 "zoned": false, 00:18:36.915 "supported_io_types": { 00:18:36.915 "read": true, 00:18:36.915 "write": true, 00:18:36.915 "unmap": false, 00:18:36.915 "flush": false, 00:18:36.915 "reset": true, 00:18:36.915 "nvme_admin": false, 00:18:36.915 "nvme_io": false, 00:18:36.915 "nvme_io_md": false, 00:18:36.915 "write_zeroes": true, 00:18:36.915 "zcopy": false, 00:18:36.915 "get_zone_info": false, 00:18:36.915 "zone_management": false, 00:18:36.915 "zone_append": false, 00:18:36.915 "compare": false, 00:18:36.915 "compare_and_write": false, 00:18:36.915 "abort": false, 00:18:36.915 "seek_hole": false, 00:18:36.915 "seek_data": false, 00:18:36.915 "copy": false, 00:18:36.915 "nvme_iov_md": false 00:18:36.915 }, 00:18:36.915 "memory_domains": [ 00:18:36.915 { 00:18:36.915 "dma_device_id": "system", 00:18:36.915 "dma_device_type": 1 00:18:36.915 }, 00:18:36.915 { 00:18:36.915 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:36.915 "dma_device_type": 2 00:18:36.915 }, 00:18:36.915 { 00:18:36.915 "dma_device_id": "system", 00:18:36.915 "dma_device_type": 1 00:18:36.915 }, 00:18:36.915 { 00:18:36.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.915 "dma_device_type": 2 00:18:36.915 } 00:18:36.915 ], 00:18:36.915 "driver_specific": { 00:18:36.915 "raid": { 00:18:36.915 "uuid": "9fe0138b-730e-4725-9a7e-1dea1cec1542", 00:18:36.915 "strip_size_kb": 0, 00:18:36.915 "state": "online", 00:18:36.915 "raid_level": "raid1", 00:18:36.915 "superblock": true, 00:18:36.915 "num_base_bdevs": 2, 00:18:36.915 "num_base_bdevs_discovered": 2, 00:18:36.915 "num_base_bdevs_operational": 2, 00:18:36.915 "base_bdevs_list": [ 00:18:36.915 { 00:18:36.915 "name": "BaseBdev1", 00:18:36.915 "uuid": "daebf8d5-eb3f-4ebe-b891-d72ac33a191a", 00:18:36.915 "is_configured": true, 00:18:36.915 "data_offset": 256, 00:18:36.915 "data_size": 7936 00:18:36.915 }, 00:18:36.915 { 00:18:36.915 "name": "BaseBdev2", 00:18:36.915 "uuid": "e92f478c-7635-4be4-9c3e-3256674bf39e", 00:18:36.915 "is_configured": true, 00:18:36.915 "data_offset": 256, 00:18:36.915 "data_size": 7936 00:18:36.915 } 00:18:36.915 ] 00:18:36.915 } 00:18:36.915 } 00:18:36.915 }' 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:36.915 BaseBdev2' 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.915 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.173 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.173 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:37.173 
17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:37.173 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:37.173 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.173 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.173 [2024-11-20 17:11:00.824938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.174 17:11:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.174 "name": "Existed_Raid", 00:18:37.174 "uuid": "9fe0138b-730e-4725-9a7e-1dea1cec1542", 00:18:37.174 "strip_size_kb": 0, 00:18:37.174 "state": "online", 00:18:37.174 "raid_level": "raid1", 00:18:37.174 "superblock": true, 00:18:37.174 "num_base_bdevs": 2, 00:18:37.174 "num_base_bdevs_discovered": 1, 00:18:37.174 "num_base_bdevs_operational": 1, 00:18:37.174 "base_bdevs_list": [ 00:18:37.174 { 00:18:37.174 "name": null, 00:18:37.174 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:37.174 "is_configured": false, 00:18:37.174 "data_offset": 0, 00:18:37.174 "data_size": 7936 00:18:37.174 }, 00:18:37.174 { 00:18:37.174 "name": "BaseBdev2", 00:18:37.174 "uuid": "e92f478c-7635-4be4-9c3e-3256674bf39e", 00:18:37.174 "is_configured": true, 00:18:37.174 "data_offset": 256, 00:18:37.174 "data_size": 7936 00:18:37.174 } 00:18:37.174 ] 00:18:37.174 }' 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.174 17:11:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.741 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:37.741 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:37.741 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.741 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.741 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:37.741 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.741 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.741 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:37.741 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:37.741 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:37.741 17:11:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.741 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.741 [2024-11-20 17:11:01.475798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:37.741 [2024-11-20 17:11:01.475934] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:37.741 [2024-11-20 17:11:01.554292] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.741 [2024-11-20 17:11:01.554370] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.741 [2024-11-20 17:11:01.554400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:37.742 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.742 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:37.742 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:37.742 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.742 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:37.742 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.742 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.742 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.001 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:38.001 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:38.001 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:38.001 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88689 00:18:38.001 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88689 ']' 00:18:38.001 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88689 00:18:38.001 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:38.001 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.001 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88689 00:18:38.001 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.001 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.001 killing process with pid 88689 00:18:38.001 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88689' 00:18:38.001 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88689 00:18:38.001 [2024-11-20 17:11:01.650130] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:38.001 17:11:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88689 00:18:38.001 [2024-11-20 17:11:01.665632] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.937 
17:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:38.937 00:18:38.937 real 0m5.578s 00:18:38.937 user 0m8.477s 00:18:38.937 sys 0m0.861s 00:18:38.937 17:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.937 17:11:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.937 ************************************ 00:18:38.937 END TEST raid_state_function_test_sb_md_interleaved 00:18:38.937 ************************************ 00:18:38.937 17:11:02 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:38.937 17:11:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:38.937 17:11:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.937 17:11:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:38.937 ************************************ 00:18:38.937 START TEST raid_superblock_test_md_interleaved 00:18:38.937 ************************************ 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88937 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88937 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88937 ']' 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.938 17:11:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.196 [2024-11-20 17:11:02.850332] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:18:39.196 [2024-11-20 17:11:02.850537] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88937 ] 00:18:39.196 [2024-11-20 17:11:03.043569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.455 [2024-11-20 17:11:03.205349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.714 [2024-11-20 17:11:03.413114] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.714 [2024-11-20 17:11:03.413222] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:40.282 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.282 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:40.282 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:40.282 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:40.282 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:40.282 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:40.282 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:40.282 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:40.282 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:40.282 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:40.282 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:40.282 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.282 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.282 malloc1 00:18:40.282 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.282 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:40.282 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.282 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.283 [2024-11-20 17:11:03.895065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:40.283 [2024-11-20 17:11:03.895154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.283 [2024-11-20 17:11:03.895185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:40.283 [2024-11-20 17:11:03.895200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.283 
[2024-11-20 17:11:03.897729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.283 [2024-11-20 17:11:03.897800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:40.283 pt1 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.283 malloc2 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.283 [2024-11-20 17:11:03.947856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:40.283 [2024-11-20 17:11:03.947932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.283 [2024-11-20 17:11:03.947977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:40.283 [2024-11-20 17:11:03.947992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.283 [2024-11-20 17:11:03.950624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.283 [2024-11-20 17:11:03.950685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:40.283 pt2 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.283 [2024-11-20 17:11:03.959884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:40.283 [2024-11-20 17:11:03.962435] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:40.283 [2024-11-20 17:11:03.962682] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:40.283 [2024-11-20 17:11:03.962701] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:40.283 [2024-11-20 17:11:03.962817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:40.283 [2024-11-20 17:11:03.962917] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:40.283 [2024-11-20 17:11:03.962942] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:40.283 [2024-11-20 17:11:03.963034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.283 
17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.283 17:11:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.283 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.283 "name": "raid_bdev1", 00:18:40.283 "uuid": "2ba86598-75a3-49bd-9ec9-0780e9b4e0ae", 00:18:40.283 "strip_size_kb": 0, 00:18:40.283 "state": "online", 00:18:40.283 "raid_level": "raid1", 00:18:40.283 "superblock": true, 00:18:40.283 "num_base_bdevs": 2, 00:18:40.283 "num_base_bdevs_discovered": 2, 00:18:40.283 "num_base_bdevs_operational": 2, 00:18:40.283 "base_bdevs_list": [ 00:18:40.283 { 00:18:40.283 "name": "pt1", 00:18:40.283 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:40.283 "is_configured": true, 00:18:40.283 "data_offset": 256, 00:18:40.283 "data_size": 7936 00:18:40.283 }, 00:18:40.283 { 00:18:40.283 "name": "pt2", 00:18:40.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.283 "is_configured": true, 00:18:40.283 "data_offset": 256, 00:18:40.283 "data_size": 7936 00:18:40.283 } 00:18:40.283 ] 00:18:40.283 }' 00:18:40.283 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.283 17:11:04 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.851 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:40.851 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:40.851 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:40.851 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:40.851 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:40.851 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:40.851 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:40.851 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:40.851 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.851 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.851 [2024-11-20 17:11:04.528498] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.851 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.851 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:40.851 "name": "raid_bdev1", 00:18:40.851 "aliases": [ 00:18:40.851 "2ba86598-75a3-49bd-9ec9-0780e9b4e0ae" 00:18:40.851 ], 00:18:40.851 "product_name": "Raid Volume", 00:18:40.851 "block_size": 4128, 00:18:40.851 "num_blocks": 7936, 00:18:40.851 "uuid": "2ba86598-75a3-49bd-9ec9-0780e9b4e0ae", 00:18:40.851 "md_size": 32, 
00:18:40.851 "md_interleave": true, 00:18:40.851 "dif_type": 0, 00:18:40.851 "assigned_rate_limits": { 00:18:40.851 "rw_ios_per_sec": 0, 00:18:40.851 "rw_mbytes_per_sec": 0, 00:18:40.851 "r_mbytes_per_sec": 0, 00:18:40.851 "w_mbytes_per_sec": 0 00:18:40.851 }, 00:18:40.851 "claimed": false, 00:18:40.851 "zoned": false, 00:18:40.851 "supported_io_types": { 00:18:40.851 "read": true, 00:18:40.851 "write": true, 00:18:40.851 "unmap": false, 00:18:40.851 "flush": false, 00:18:40.851 "reset": true, 00:18:40.851 "nvme_admin": false, 00:18:40.851 "nvme_io": false, 00:18:40.851 "nvme_io_md": false, 00:18:40.851 "write_zeroes": true, 00:18:40.851 "zcopy": false, 00:18:40.851 "get_zone_info": false, 00:18:40.851 "zone_management": false, 00:18:40.851 "zone_append": false, 00:18:40.851 "compare": false, 00:18:40.851 "compare_and_write": false, 00:18:40.851 "abort": false, 00:18:40.851 "seek_hole": false, 00:18:40.851 "seek_data": false, 00:18:40.851 "copy": false, 00:18:40.851 "nvme_iov_md": false 00:18:40.851 }, 00:18:40.851 "memory_domains": [ 00:18:40.851 { 00:18:40.851 "dma_device_id": "system", 00:18:40.851 "dma_device_type": 1 00:18:40.851 }, 00:18:40.851 { 00:18:40.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.851 "dma_device_type": 2 00:18:40.851 }, 00:18:40.851 { 00:18:40.851 "dma_device_id": "system", 00:18:40.851 "dma_device_type": 1 00:18:40.851 }, 00:18:40.851 { 00:18:40.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.851 "dma_device_type": 2 00:18:40.851 } 00:18:40.851 ], 00:18:40.851 "driver_specific": { 00:18:40.851 "raid": { 00:18:40.851 "uuid": "2ba86598-75a3-49bd-9ec9-0780e9b4e0ae", 00:18:40.851 "strip_size_kb": 0, 00:18:40.851 "state": "online", 00:18:40.851 "raid_level": "raid1", 00:18:40.851 "superblock": true, 00:18:40.851 "num_base_bdevs": 2, 00:18:40.851 "num_base_bdevs_discovered": 2, 00:18:40.851 "num_base_bdevs_operational": 2, 00:18:40.851 "base_bdevs_list": [ 00:18:40.851 { 00:18:40.851 "name": "pt1", 00:18:40.851 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:40.851 "is_configured": true, 00:18:40.851 "data_offset": 256, 00:18:40.851 "data_size": 7936 00:18:40.851 }, 00:18:40.851 { 00:18:40.851 "name": "pt2", 00:18:40.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.851 "is_configured": true, 00:18:40.852 "data_offset": 256, 00:18:40.852 "data_size": 7936 00:18:40.852 } 00:18:40.852 ] 00:18:40.852 } 00:18:40.852 } 00:18:40.852 }' 00:18:40.852 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:40.852 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:40.852 pt2' 00:18:40.852 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.852 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:40.852 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:40.852 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.852 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:40.852 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.852 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.852 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.110 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:41.110 17:11:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:41.110 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.111 [2024-11-20 17:11:04.776502] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2ba86598-75a3-49bd-9ec9-0780e9b4e0ae 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 2ba86598-75a3-49bd-9ec9-0780e9b4e0ae ']' 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.111 [2024-11-20 17:11:04.828049] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:41.111 [2024-11-20 17:11:04.828153] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:41.111 [2024-11-20 17:11:04.828238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.111 [2024-11-20 17:11:04.828360] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:41.111 [2024-11-20 17:11:04.828393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.111 17:11:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.111 17:11:04 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.111 [2024-11-20 17:11:04.956185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:41.111 [2024-11-20 17:11:04.958843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:41.111 [2024-11-20 17:11:04.958964] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:41.111 [2024-11-20 17:11:04.959030] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:41.111 [2024-11-20 17:11:04.959054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:41.111 [2024-11-20 17:11:04.959066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:41.111 request: 00:18:41.111 { 00:18:41.111 "name": "raid_bdev1", 00:18:41.111 "raid_level": "raid1", 00:18:41.111 "base_bdevs": [ 00:18:41.111 "malloc1", 00:18:41.111 "malloc2" 00:18:41.111 ], 00:18:41.111 "superblock": false, 00:18:41.111 "method": "bdev_raid_create", 00:18:41.111 "req_id": 1 00:18:41.111 } 00:18:41.111 Got JSON-RPC error response 00:18:41.111 response: 00:18:41.111 { 00:18:41.111 "code": -17, 00:18:41.111 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:41.111 } 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.111 17:11:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:41.111 17:11:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.370 [2024-11-20 17:11:05.012208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:41.370 [2024-11-20 17:11:05.012282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.370 [2024-11-20 17:11:05.012320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:41.370 [2024-11-20 17:11:05.012335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.370 [2024-11-20 17:11:05.015246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.370 [2024-11-20 17:11:05.015313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:41.370 [2024-11-20 17:11:05.015366] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:41.370 [2024-11-20 17:11:05.015426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:41.370 pt1 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.370 17:11:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.370 
"name": "raid_bdev1", 00:18:41.370 "uuid": "2ba86598-75a3-49bd-9ec9-0780e9b4e0ae", 00:18:41.370 "strip_size_kb": 0, 00:18:41.370 "state": "configuring", 00:18:41.370 "raid_level": "raid1", 00:18:41.370 "superblock": true, 00:18:41.370 "num_base_bdevs": 2, 00:18:41.370 "num_base_bdevs_discovered": 1, 00:18:41.370 "num_base_bdevs_operational": 2, 00:18:41.370 "base_bdevs_list": [ 00:18:41.370 { 00:18:41.370 "name": "pt1", 00:18:41.370 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:41.370 "is_configured": true, 00:18:41.370 "data_offset": 256, 00:18:41.370 "data_size": 7936 00:18:41.370 }, 00:18:41.370 { 00:18:41.370 "name": null, 00:18:41.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.370 "is_configured": false, 00:18:41.370 "data_offset": 256, 00:18:41.370 "data_size": 7936 00:18:41.370 } 00:18:41.370 ] 00:18:41.370 }' 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.370 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.938 [2024-11-20 17:11:05.564449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:41.938 [2024-11-20 17:11:05.564564] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.938 [2024-11-20 17:11:05.564592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:41.938 [2024-11-20 17:11:05.564608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.938 [2024-11-20 17:11:05.564910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.938 [2024-11-20 17:11:05.564949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:41.938 [2024-11-20 17:11:05.565041] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:41.938 [2024-11-20 17:11:05.565090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.938 [2024-11-20 17:11:05.565202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:41.938 [2024-11-20 17:11:05.565227] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:41.938 [2024-11-20 17:11:05.565320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:41.938 [2024-11-20 17:11:05.565418] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:41.938 [2024-11-20 17:11:05.565431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:41.938 [2024-11-20 17:11:05.565525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.938 pt2 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:41.938 17:11:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.938 "name": 
"raid_bdev1", 00:18:41.938 "uuid": "2ba86598-75a3-49bd-9ec9-0780e9b4e0ae", 00:18:41.938 "strip_size_kb": 0, 00:18:41.938 "state": "online", 00:18:41.938 "raid_level": "raid1", 00:18:41.938 "superblock": true, 00:18:41.938 "num_base_bdevs": 2, 00:18:41.938 "num_base_bdevs_discovered": 2, 00:18:41.938 "num_base_bdevs_operational": 2, 00:18:41.938 "base_bdevs_list": [ 00:18:41.938 { 00:18:41.938 "name": "pt1", 00:18:41.938 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:41.938 "is_configured": true, 00:18:41.938 "data_offset": 256, 00:18:41.938 "data_size": 7936 00:18:41.938 }, 00:18:41.938 { 00:18:41.938 "name": "pt2", 00:18:41.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.938 "is_configured": true, 00:18:41.938 "data_offset": 256, 00:18:41.938 "data_size": 7936 00:18:41.938 } 00:18:41.938 ] 00:18:41.938 }' 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.938 17:11:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.506 [2024-11-20 17:11:06.113005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:42.506 "name": "raid_bdev1", 00:18:42.506 "aliases": [ 00:18:42.506 "2ba86598-75a3-49bd-9ec9-0780e9b4e0ae" 00:18:42.506 ], 00:18:42.506 "product_name": "Raid Volume", 00:18:42.506 "block_size": 4128, 00:18:42.506 "num_blocks": 7936, 00:18:42.506 "uuid": "2ba86598-75a3-49bd-9ec9-0780e9b4e0ae", 00:18:42.506 "md_size": 32, 00:18:42.506 "md_interleave": true, 00:18:42.506 "dif_type": 0, 00:18:42.506 "assigned_rate_limits": { 00:18:42.506 "rw_ios_per_sec": 0, 00:18:42.506 "rw_mbytes_per_sec": 0, 00:18:42.506 "r_mbytes_per_sec": 0, 00:18:42.506 "w_mbytes_per_sec": 0 00:18:42.506 }, 00:18:42.506 "claimed": false, 00:18:42.506 "zoned": false, 00:18:42.506 "supported_io_types": { 00:18:42.506 "read": true, 00:18:42.506 "write": true, 00:18:42.506 "unmap": false, 00:18:42.506 "flush": false, 00:18:42.506 "reset": true, 00:18:42.506 "nvme_admin": false, 00:18:42.506 "nvme_io": false, 00:18:42.506 "nvme_io_md": false, 00:18:42.506 "write_zeroes": true, 00:18:42.506 "zcopy": false, 00:18:42.506 "get_zone_info": false, 00:18:42.506 "zone_management": false, 00:18:42.506 "zone_append": false, 00:18:42.506 "compare": false, 00:18:42.506 "compare_and_write": false, 00:18:42.506 "abort": false, 00:18:42.506 "seek_hole": false, 00:18:42.506 "seek_data": false, 00:18:42.506 "copy": false, 00:18:42.506 "nvme_iov_md": false 00:18:42.506 }, 
00:18:42.506 "memory_domains": [ 00:18:42.506 { 00:18:42.506 "dma_device_id": "system", 00:18:42.506 "dma_device_type": 1 00:18:42.506 }, 00:18:42.506 { 00:18:42.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.506 "dma_device_type": 2 00:18:42.506 }, 00:18:42.506 { 00:18:42.506 "dma_device_id": "system", 00:18:42.506 "dma_device_type": 1 00:18:42.506 }, 00:18:42.506 { 00:18:42.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.506 "dma_device_type": 2 00:18:42.506 } 00:18:42.506 ], 00:18:42.506 "driver_specific": { 00:18:42.506 "raid": { 00:18:42.506 "uuid": "2ba86598-75a3-49bd-9ec9-0780e9b4e0ae", 00:18:42.506 "strip_size_kb": 0, 00:18:42.506 "state": "online", 00:18:42.506 "raid_level": "raid1", 00:18:42.506 "superblock": true, 00:18:42.506 "num_base_bdevs": 2, 00:18:42.506 "num_base_bdevs_discovered": 2, 00:18:42.506 "num_base_bdevs_operational": 2, 00:18:42.506 "base_bdevs_list": [ 00:18:42.506 { 00:18:42.506 "name": "pt1", 00:18:42.506 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:42.506 "is_configured": true, 00:18:42.506 "data_offset": 256, 00:18:42.506 "data_size": 7936 00:18:42.506 }, 00:18:42.506 { 00:18:42.506 "name": "pt2", 00:18:42.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.506 "is_configured": true, 00:18:42.506 "data_offset": 256, 00:18:42.506 "data_size": 7936 00:18:42.506 } 00:18:42.506 ] 00:18:42.506 } 00:18:42.506 } 00:18:42.506 }' 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:42.506 pt2' 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:18:42.506 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.766 [2024-11-20 17:11:06.381091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 2ba86598-75a3-49bd-9ec9-0780e9b4e0ae '!=' 2ba86598-75a3-49bd-9ec9-0780e9b4e0ae ']' 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.766 [2024-11-20 17:11:06.432739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:18:42.766 "name": "raid_bdev1", 00:18:42.766 "uuid": "2ba86598-75a3-49bd-9ec9-0780e9b4e0ae", 00:18:42.766 "strip_size_kb": 0, 00:18:42.766 "state": "online", 00:18:42.766 "raid_level": "raid1", 00:18:42.766 "superblock": true, 00:18:42.766 "num_base_bdevs": 2, 00:18:42.766 "num_base_bdevs_discovered": 1, 00:18:42.766 "num_base_bdevs_operational": 1, 00:18:42.766 "base_bdevs_list": [ 00:18:42.766 { 00:18:42.766 "name": null, 00:18:42.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.766 "is_configured": false, 00:18:42.766 "data_offset": 0, 00:18:42.766 "data_size": 7936 00:18:42.766 }, 00:18:42.766 { 00:18:42.766 "name": "pt2", 00:18:42.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.766 "is_configured": true, 00:18:42.766 "data_offset": 256, 00:18:42.766 "data_size": 7936 00:18:42.766 } 00:18:42.766 ] 00:18:42.766 }' 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.766 17:11:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.334 [2024-11-20 17:11:07.028939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.334 [2024-11-20 17:11:07.028972] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.334 [2024-11-20 17:11:07.029084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.334 [2024-11-20 17:11:07.029184] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.334 [2024-11-20 
17:11:07.029230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.334 [2024-11-20 17:11:07.108943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:43.334 [2024-11-20 17:11:07.109018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.334 [2024-11-20 17:11:07.109041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:43.334 [2024-11-20 17:11:07.109056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.334 [2024-11-20 17:11:07.111925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.334 [2024-11-20 17:11:07.112013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:43.334 [2024-11-20 17:11:07.112111] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:43.334 [2024-11-20 17:11:07.112184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:43.334 [2024-11-20 17:11:07.112265] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:43.334 [2024-11-20 17:11:07.112284] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:18:43.334 [2024-11-20 17:11:07.112399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:43.334 [2024-11-20 17:11:07.112499] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:43.334 [2024-11-20 17:11:07.112512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:43.334 [2024-11-20 17:11:07.112585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.334 pt2 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.334 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.334 "name": "raid_bdev1", 00:18:43.334 "uuid": "2ba86598-75a3-49bd-9ec9-0780e9b4e0ae", 00:18:43.334 "strip_size_kb": 0, 00:18:43.335 "state": "online", 00:18:43.335 "raid_level": "raid1", 00:18:43.335 "superblock": true, 00:18:43.335 "num_base_bdevs": 2, 00:18:43.335 "num_base_bdevs_discovered": 1, 00:18:43.335 "num_base_bdevs_operational": 1, 00:18:43.335 "base_bdevs_list": [ 00:18:43.335 { 00:18:43.335 "name": null, 00:18:43.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.335 "is_configured": false, 00:18:43.335 "data_offset": 256, 00:18:43.335 "data_size": 7936 00:18:43.335 }, 00:18:43.335 { 00:18:43.335 "name": "pt2", 00:18:43.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.335 "is_configured": true, 00:18:43.335 "data_offset": 256, 00:18:43.335 "data_size": 7936 00:18:43.335 } 00:18:43.335 ] 00:18:43.335 }' 00:18:43.335 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.335 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.903 [2024-11-20 17:11:07.641067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.903 [2024-11-20 17:11:07.641110] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.903 [2024-11-20 17:11:07.641221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.903 [2024-11-20 17:11:07.641328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.903 [2024-11-20 17:11:07.641342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.903 [2024-11-20 17:11:07.705085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:43.903 [2024-11-20 17:11:07.705188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.903 [2024-11-20 17:11:07.705218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:43.903 [2024-11-20 17:11:07.705231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.903 [2024-11-20 17:11:07.707888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.903 [2024-11-20 17:11:07.707931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:43.903 [2024-11-20 17:11:07.708012] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:43.903 [2024-11-20 17:11:07.708063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:43.903 [2024-11-20 17:11:07.708227] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:43.903 [2024-11-20 17:11:07.708253] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.903 [2024-11-20 17:11:07.708276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:43.903 [2024-11-20 17:11:07.708342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:43.903 [2024-11-20 17:11:07.708440] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:43.903 [2024-11-20 17:11:07.708467] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:43.903 [2024-11-20 17:11:07.708581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:43.903 [2024-11-20 17:11:07.708658] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:43.903 [2024-11-20 17:11:07.708680] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:43.903 [2024-11-20 17:11:07.708780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.903 pt1 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.903 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.904 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.904 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.904 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.904 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.904 "name": "raid_bdev1", 00:18:43.904 "uuid": "2ba86598-75a3-49bd-9ec9-0780e9b4e0ae", 00:18:43.904 "strip_size_kb": 0, 00:18:43.904 "state": "online", 00:18:43.904 "raid_level": "raid1", 00:18:43.904 "superblock": true, 00:18:43.904 "num_base_bdevs": 2, 00:18:43.904 "num_base_bdevs_discovered": 1, 00:18:43.904 "num_base_bdevs_operational": 1, 00:18:43.904 "base_bdevs_list": [ 00:18:43.904 { 00:18:43.904 "name": null, 00:18:43.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.904 "is_configured": false, 00:18:43.904 "data_offset": 256, 00:18:43.904 "data_size": 7936 00:18:43.904 }, 00:18:43.904 { 00:18:43.904 "name": "pt2", 00:18:43.904 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.904 "is_configured": true, 00:18:43.904 "data_offset": 256, 00:18:43.904 "data_size": 7936 00:18:43.904 } 00:18:43.904 ] 00:18:43.904 }' 00:18:43.904 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.904 17:11:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.508 17:11:08 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:44.508 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:44.508 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.508 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.508 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.508 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:44.508 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:44.508 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:44.508 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.508 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.508 [2024-11-20 17:11:08.309600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.508 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.508 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 2ba86598-75a3-49bd-9ec9-0780e9b4e0ae '!=' 2ba86598-75a3-49bd-9ec9-0780e9b4e0ae ']' 00:18:44.508 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88937 00:18:44.508 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88937 ']' 00:18:44.508 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88937 00:18:44.791 17:11:08 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:44.791 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.791 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88937 00:18:44.791 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:44.791 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:44.791 killing process with pid 88937 00:18:44.791 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88937' 00:18:44.791 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88937 00:18:44.791 [2024-11-20 17:11:08.385376] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:44.791 [2024-11-20 17:11:08.385478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.791 17:11:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88937 00:18:44.791 [2024-11-20 17:11:08.385560] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.791 [2024-11-20 17:11:08.385583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:44.791 [2024-11-20 17:11:08.546499] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:45.727 17:11:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:45.727 00:18:45.727 real 0m6.827s 00:18:45.727 user 0m10.845s 00:18:45.727 sys 0m1.083s 00:18:45.727 17:11:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:18:45.727 17:11:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.727 ************************************ 00:18:45.727 END TEST raid_superblock_test_md_interleaved 00:18:45.727 ************************************ 00:18:45.986 17:11:09 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:45.986 17:11:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:45.986 17:11:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:45.986 17:11:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:45.986 ************************************ 00:18:45.986 START TEST raid_rebuild_test_sb_md_interleaved 00:18:45.986 ************************************ 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89271 00:18:45.986 17:11:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89271 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89271 ']' 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.986 17:11:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.986 [2024-11-20 17:11:09.787200] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:18:45.986 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:45.986 Zero copy mechanism will not be used. 
00:18:45.986 [2024-11-20 17:11:09.787453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89271 ] 00:18:46.245 [2024-11-20 17:11:09.997848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.503 [2024-11-20 17:11:10.129430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.503 [2024-11-20 17:11:10.343213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.503 [2024-11-20 17:11:10.343289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.069 BaseBdev1_malloc 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.069 17:11:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.069 [2024-11-20 17:11:10.838495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:47.069 [2024-11-20 17:11:10.838597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.069 [2024-11-20 17:11:10.838646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:47.069 [2024-11-20 17:11:10.838666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.069 [2024-11-20 17:11:10.841331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.069 [2024-11-20 17:11:10.841392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:47.069 BaseBdev1 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.069 BaseBdev2_malloc 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:47.069 [2024-11-20 17:11:10.885115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:47.069 [2024-11-20 17:11:10.885218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.069 [2024-11-20 17:11:10.885247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:47.069 [2024-11-20 17:11:10.885273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.069 [2024-11-20 17:11:10.887875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.069 [2024-11-20 17:11:10.887937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:47.069 BaseBdev2 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.069 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.328 spare_malloc 00:18:47.328 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.328 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:47.328 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.328 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.328 spare_delay 00:18:47.328 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.328 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:47.328 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.328 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.328 [2024-11-20 17:11:10.960680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:47.328 [2024-11-20 17:11:10.960832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.328 [2024-11-20 17:11:10.960864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:47.328 [2024-11-20 17:11:10.960892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.328 [2024-11-20 17:11:10.963787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.328 [2024-11-20 17:11:10.963840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:47.328 spare 00:18:47.328 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.329 [2024-11-20 17:11:10.968839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.329 [2024-11-20 17:11:10.971578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:47.329 [2024-11-20 
17:11:10.971883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:47.329 [2024-11-20 17:11:10.971913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:47.329 [2024-11-20 17:11:10.972027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:47.329 [2024-11-20 17:11:10.972166] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:47.329 [2024-11-20 17:11:10.972190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:47.329 [2024-11-20 17:11:10.972290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.329 17:11:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.329 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.329 "name": "raid_bdev1", 00:18:47.329 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:47.329 "strip_size_kb": 0, 00:18:47.329 "state": "online", 00:18:47.329 "raid_level": "raid1", 00:18:47.329 "superblock": true, 00:18:47.329 "num_base_bdevs": 2, 00:18:47.329 "num_base_bdevs_discovered": 2, 00:18:47.329 "num_base_bdevs_operational": 2, 00:18:47.329 "base_bdevs_list": [ 00:18:47.329 { 00:18:47.329 "name": "BaseBdev1", 00:18:47.329 "uuid": "fb12e6e5-8897-561e-baa9-634fd2593592", 00:18:47.329 "is_configured": true, 00:18:47.329 "data_offset": 256, 00:18:47.329 "data_size": 7936 00:18:47.329 }, 00:18:47.329 { 00:18:47.329 "name": "BaseBdev2", 00:18:47.329 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:47.329 "is_configured": true, 00:18:47.329 "data_offset": 256, 00:18:47.329 "data_size": 7936 00:18:47.329 } 00:18:47.329 ] 00:18:47.329 }' 00:18:47.329 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.329 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.897 17:11:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:47.897 [2024-11-20 17:11:11.477367] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:47.897 17:11:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.897 [2024-11-20 17:11:11.580987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.897 17:11:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.897 "name": "raid_bdev1", 00:18:47.897 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:47.897 "strip_size_kb": 0, 00:18:47.897 "state": "online", 00:18:47.897 "raid_level": "raid1", 00:18:47.897 "superblock": true, 00:18:47.897 "num_base_bdevs": 2, 00:18:47.897 "num_base_bdevs_discovered": 1, 00:18:47.897 "num_base_bdevs_operational": 1, 00:18:47.897 "base_bdevs_list": [ 00:18:47.897 { 00:18:47.897 "name": null, 00:18:47.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.897 "is_configured": false, 00:18:47.897 "data_offset": 0, 00:18:47.897 "data_size": 7936 00:18:47.897 }, 00:18:47.897 { 00:18:47.897 "name": "BaseBdev2", 00:18:47.897 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:47.897 "is_configured": true, 00:18:47.897 "data_offset": 256, 00:18:47.897 "data_size": 7936 00:18:47.897 } 00:18:47.897 ] 00:18:47.897 }' 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.897 17:11:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.465 17:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:48.465 17:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.465 17:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.465 [2024-11-20 17:11:12.085154] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:48.465 [2024-11-20 17:11:12.101985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:48.465 17:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.465 17:11:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:48.465 [2024-11-20 17:11:12.104557] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.401 "name": "raid_bdev1", 00:18:49.401 
"uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:49.401 "strip_size_kb": 0, 00:18:49.401 "state": "online", 00:18:49.401 "raid_level": "raid1", 00:18:49.401 "superblock": true, 00:18:49.401 "num_base_bdevs": 2, 00:18:49.401 "num_base_bdevs_discovered": 2, 00:18:49.401 "num_base_bdevs_operational": 2, 00:18:49.401 "process": { 00:18:49.401 "type": "rebuild", 00:18:49.401 "target": "spare", 00:18:49.401 "progress": { 00:18:49.401 "blocks": 2560, 00:18:49.401 "percent": 32 00:18:49.401 } 00:18:49.401 }, 00:18:49.401 "base_bdevs_list": [ 00:18:49.401 { 00:18:49.401 "name": "spare", 00:18:49.401 "uuid": "febe0287-bd42-508c-b997-8ae809dcee66", 00:18:49.401 "is_configured": true, 00:18:49.401 "data_offset": 256, 00:18:49.401 "data_size": 7936 00:18:49.401 }, 00:18:49.401 { 00:18:49.401 "name": "BaseBdev2", 00:18:49.401 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:49.401 "is_configured": true, 00:18:49.401 "data_offset": 256, 00:18:49.401 "data_size": 7936 00:18:49.401 } 00:18:49.401 ] 00:18:49.401 }' 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.401 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.401 [2024-11-20 17:11:13.265875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:49.660 [2024-11-20 17:11:13.313788] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:49.660 [2024-11-20 17:11:13.314113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.660 [2024-11-20 17:11:13.314141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.660 [2024-11-20 17:11:13.314157] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.660 "name": "raid_bdev1", 00:18:49.660 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:49.660 "strip_size_kb": 0, 00:18:49.660 "state": "online", 00:18:49.660 "raid_level": "raid1", 00:18:49.660 "superblock": true, 00:18:49.660 "num_base_bdevs": 2, 00:18:49.660 "num_base_bdevs_discovered": 1, 00:18:49.660 "num_base_bdevs_operational": 1, 00:18:49.660 "base_bdevs_list": [ 00:18:49.660 { 00:18:49.660 "name": null, 00:18:49.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.660 "is_configured": false, 00:18:49.660 "data_offset": 0, 00:18:49.660 "data_size": 7936 00:18:49.660 }, 00:18:49.660 { 00:18:49.660 "name": "BaseBdev2", 00:18:49.660 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:49.660 "is_configured": true, 00:18:49.660 "data_offset": 256, 00:18:49.660 "data_size": 7936 00:18:49.660 } 00:18:49.660 ] 00:18:49.660 }' 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.660 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.228 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:50.228 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:50.228 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:50.228 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:50.228 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.228 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.228 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.228 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.229 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.229 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.229 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.229 "name": "raid_bdev1", 00:18:50.229 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:50.229 "strip_size_kb": 0, 00:18:50.229 "state": "online", 00:18:50.229 "raid_level": "raid1", 00:18:50.229 "superblock": true, 00:18:50.229 "num_base_bdevs": 2, 00:18:50.229 "num_base_bdevs_discovered": 1, 00:18:50.229 "num_base_bdevs_operational": 1, 00:18:50.229 "base_bdevs_list": [ 00:18:50.229 { 00:18:50.229 "name": null, 00:18:50.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.229 "is_configured": false, 00:18:50.229 "data_offset": 0, 00:18:50.229 "data_size": 7936 00:18:50.229 }, 00:18:50.229 { 00:18:50.229 "name": "BaseBdev2", 00:18:50.229 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:50.229 "is_configured": true, 00:18:50.229 "data_offset": 256, 00:18:50.229 "data_size": 7936 00:18:50.229 } 00:18:50.229 ] 00:18:50.229 }' 
00:18:50.229 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.229 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:50.229 17:11:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.229 17:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:50.229 17:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:50.229 17:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.229 17:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.229 [2024-11-20 17:11:14.039713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.229 [2024-11-20 17:11:14.056467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:50.229 17:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.229 17:11:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:50.229 [2024-11-20 17:11:14.059387] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.605 "name": "raid_bdev1", 00:18:51.605 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:51.605 "strip_size_kb": 0, 00:18:51.605 "state": "online", 00:18:51.605 "raid_level": "raid1", 00:18:51.605 "superblock": true, 00:18:51.605 "num_base_bdevs": 2, 00:18:51.605 "num_base_bdevs_discovered": 2, 00:18:51.605 "num_base_bdevs_operational": 2, 00:18:51.605 "process": { 00:18:51.605 "type": "rebuild", 00:18:51.605 "target": "spare", 00:18:51.605 "progress": { 00:18:51.605 "blocks": 2560, 00:18:51.605 "percent": 32 00:18:51.605 } 00:18:51.605 }, 00:18:51.605 "base_bdevs_list": [ 00:18:51.605 { 00:18:51.605 "name": "spare", 00:18:51.605 "uuid": "febe0287-bd42-508c-b997-8ae809dcee66", 00:18:51.605 "is_configured": true, 00:18:51.605 "data_offset": 256, 00:18:51.605 "data_size": 7936 00:18:51.605 }, 00:18:51.605 { 00:18:51.605 "name": "BaseBdev2", 00:18:51.605 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:51.605 "is_configured": true, 00:18:51.605 "data_offset": 256, 00:18:51.605 "data_size": 7936 00:18:51.605 } 00:18:51.605 ] 00:18:51.605 }' 00:18:51.605 17:11:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:51.605 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=791 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.605 17:11:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.605 "name": "raid_bdev1", 00:18:51.605 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:51.605 "strip_size_kb": 0, 00:18:51.605 "state": "online", 00:18:51.605 "raid_level": "raid1", 00:18:51.605 "superblock": true, 00:18:51.605 "num_base_bdevs": 2, 00:18:51.605 "num_base_bdevs_discovered": 2, 00:18:51.605 "num_base_bdevs_operational": 2, 00:18:51.605 "process": { 00:18:51.605 "type": "rebuild", 00:18:51.605 "target": "spare", 00:18:51.605 "progress": { 00:18:51.605 "blocks": 2816, 00:18:51.605 "percent": 35 00:18:51.605 } 00:18:51.605 }, 00:18:51.605 "base_bdevs_list": [ 00:18:51.605 { 00:18:51.605 "name": "spare", 00:18:51.605 "uuid": "febe0287-bd42-508c-b997-8ae809dcee66", 00:18:51.605 "is_configured": true, 00:18:51.605 "data_offset": 256, 00:18:51.605 "data_size": 7936 00:18:51.605 }, 00:18:51.605 { 00:18:51.605 "name": "BaseBdev2", 00:18:51.605 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:51.605 "is_configured": true, 00:18:51.605 "data_offset": 256, 00:18:51.605 "data_size": 7936 00:18:51.605 } 00:18:51.605 ] 00:18:51.605 }' 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.605 17:11:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:52.540 17:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:52.540 17:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.540 17:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.540 17:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.540 17:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.540 17:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.540 17:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.540 17:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.540 17:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.540 17:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.798 17:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.798 17:11:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.798 "name": "raid_bdev1", 00:18:52.798 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:52.798 "strip_size_kb": 0, 00:18:52.798 "state": "online", 00:18:52.798 "raid_level": "raid1", 00:18:52.798 "superblock": true, 00:18:52.798 "num_base_bdevs": 2, 00:18:52.798 "num_base_bdevs_discovered": 2, 00:18:52.798 "num_base_bdevs_operational": 2, 00:18:52.798 "process": { 00:18:52.798 "type": "rebuild", 00:18:52.798 "target": "spare", 00:18:52.798 "progress": { 00:18:52.798 "blocks": 5888, 00:18:52.798 "percent": 74 00:18:52.798 } 00:18:52.798 }, 00:18:52.798 "base_bdevs_list": [ 00:18:52.798 { 00:18:52.798 "name": "spare", 00:18:52.798 "uuid": "febe0287-bd42-508c-b997-8ae809dcee66", 00:18:52.798 "is_configured": true, 00:18:52.798 "data_offset": 256, 00:18:52.798 "data_size": 7936 00:18:52.798 }, 00:18:52.798 { 00:18:52.798 "name": "BaseBdev2", 00:18:52.798 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:52.798 "is_configured": true, 00:18:52.798 "data_offset": 256, 00:18:52.798 "data_size": 7936 00:18:52.798 } 00:18:52.798 ] 00:18:52.798 }' 00:18:52.798 17:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.798 17:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.798 17:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.798 17:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.798 17:11:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:53.365 [2024-11-20 17:11:17.182454] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:53.365 [2024-11-20 17:11:17.182533] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:53.365 [2024-11-20 17:11:17.182726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.934 "name": "raid_bdev1", 00:18:53.934 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:53.934 "strip_size_kb": 0, 00:18:53.934 "state": "online", 00:18:53.934 "raid_level": "raid1", 00:18:53.934 "superblock": true, 00:18:53.934 "num_base_bdevs": 2, 00:18:53.934 
"num_base_bdevs_discovered": 2, 00:18:53.934 "num_base_bdevs_operational": 2, 00:18:53.934 "base_bdevs_list": [ 00:18:53.934 { 00:18:53.934 "name": "spare", 00:18:53.934 "uuid": "febe0287-bd42-508c-b997-8ae809dcee66", 00:18:53.934 "is_configured": true, 00:18:53.934 "data_offset": 256, 00:18:53.934 "data_size": 7936 00:18:53.934 }, 00:18:53.934 { 00:18:53.934 "name": "BaseBdev2", 00:18:53.934 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:53.934 "is_configured": true, 00:18:53.934 "data_offset": 256, 00:18:53.934 "data_size": 7936 00:18:53.934 } 00:18:53.934 ] 00:18:53.934 }' 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.934 17:11:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.934 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.193 "name": "raid_bdev1", 00:18:54.193 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:54.193 "strip_size_kb": 0, 00:18:54.193 "state": "online", 00:18:54.193 "raid_level": "raid1", 00:18:54.193 "superblock": true, 00:18:54.193 "num_base_bdevs": 2, 00:18:54.193 "num_base_bdevs_discovered": 2, 00:18:54.193 "num_base_bdevs_operational": 2, 00:18:54.193 "base_bdevs_list": [ 00:18:54.193 { 00:18:54.193 "name": "spare", 00:18:54.193 "uuid": "febe0287-bd42-508c-b997-8ae809dcee66", 00:18:54.193 "is_configured": true, 00:18:54.193 "data_offset": 256, 00:18:54.193 "data_size": 7936 00:18:54.193 }, 00:18:54.193 { 00:18:54.193 "name": "BaseBdev2", 00:18:54.193 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:54.193 "is_configured": true, 00:18:54.193 "data_offset": 256, 00:18:54.193 "data_size": 7936 00:18:54.193 } 00:18:54.193 ] 00:18:54.193 }' 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:54.193 17:11:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.193 "name": 
"raid_bdev1", 00:18:54.193 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:54.193 "strip_size_kb": 0, 00:18:54.193 "state": "online", 00:18:54.193 "raid_level": "raid1", 00:18:54.193 "superblock": true, 00:18:54.193 "num_base_bdevs": 2, 00:18:54.193 "num_base_bdevs_discovered": 2, 00:18:54.193 "num_base_bdevs_operational": 2, 00:18:54.193 "base_bdevs_list": [ 00:18:54.193 { 00:18:54.193 "name": "spare", 00:18:54.193 "uuid": "febe0287-bd42-508c-b997-8ae809dcee66", 00:18:54.193 "is_configured": true, 00:18:54.193 "data_offset": 256, 00:18:54.193 "data_size": 7936 00:18:54.193 }, 00:18:54.193 { 00:18:54.193 "name": "BaseBdev2", 00:18:54.193 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:54.193 "is_configured": true, 00:18:54.193 "data_offset": 256, 00:18:54.193 "data_size": 7936 00:18:54.193 } 00:18:54.193 ] 00:18:54.193 }' 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.193 17:11:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.759 [2024-11-20 17:11:18.369542] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:54.759 [2024-11-20 17:11:18.369723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:54.759 [2024-11-20 17:11:18.370018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:54.759 [2024-11-20 17:11:18.370259] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:54.759 [2024-11-20 
17:11:18.370396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:54.759 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.759 17:11:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.759 [2024-11-20 17:11:18.445579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:54.759 [2024-11-20 17:11:18.445649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.759 [2024-11-20 17:11:18.445679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:54.759 [2024-11-20 17:11:18.445691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.759 [2024-11-20 17:11:18.448698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.759 spare 00:18:54.759 [2024-11-20 17:11:18.448911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:54.760 [2024-11-20 17:11:18.449005] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:54.760 [2024-11-20 17:11:18.449075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:54.760 [2024-11-20 17:11:18.449251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.760 [2024-11-20 17:11:18.549355] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:54.760 [2024-11-20 17:11:18.549547] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:54.760 [2024-11-20 17:11:18.549690] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:54.760 [2024-11-20 17:11:18.549995] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:54.760 [2024-11-20 17:11:18.550135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:54.760 [2024-11-20 17:11:18.550273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.760 17:11:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.760 "name": "raid_bdev1", 00:18:54.760 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:54.760 "strip_size_kb": 0, 00:18:54.760 "state": "online", 00:18:54.760 "raid_level": "raid1", 00:18:54.760 "superblock": true, 00:18:54.760 "num_base_bdevs": 2, 00:18:54.760 "num_base_bdevs_discovered": 2, 00:18:54.760 "num_base_bdevs_operational": 2, 00:18:54.760 "base_bdevs_list": [ 00:18:54.760 { 00:18:54.760 "name": "spare", 00:18:54.760 "uuid": "febe0287-bd42-508c-b997-8ae809dcee66", 00:18:54.760 "is_configured": true, 00:18:54.760 "data_offset": 256, 00:18:54.760 "data_size": 7936 00:18:54.760 }, 00:18:54.760 { 00:18:54.760 "name": "BaseBdev2", 00:18:54.760 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:54.760 "is_configured": true, 00:18:54.760 "data_offset": 256, 00:18:54.760 "data_size": 7936 00:18:54.760 } 00:18:54.760 ] 00:18:54.760 }' 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.760 17:11:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.326 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:55.326 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.326 17:11:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:55.326 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:55.326 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.326 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.326 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.326 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.326 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.326 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.326 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.326 "name": "raid_bdev1", 00:18:55.326 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:55.326 "strip_size_kb": 0, 00:18:55.326 "state": "online", 00:18:55.326 "raid_level": "raid1", 00:18:55.326 "superblock": true, 00:18:55.326 "num_base_bdevs": 2, 00:18:55.326 "num_base_bdevs_discovered": 2, 00:18:55.326 "num_base_bdevs_operational": 2, 00:18:55.326 "base_bdevs_list": [ 00:18:55.326 { 00:18:55.326 "name": "spare", 00:18:55.326 "uuid": "febe0287-bd42-508c-b997-8ae809dcee66", 00:18:55.326 "is_configured": true, 00:18:55.326 "data_offset": 256, 00:18:55.326 "data_size": 7936 00:18:55.326 }, 00:18:55.326 { 00:18:55.326 "name": "BaseBdev2", 00:18:55.326 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:55.326 "is_configured": true, 00:18:55.326 "data_offset": 256, 00:18:55.326 "data_size": 7936 00:18:55.326 } 00:18:55.326 ] 00:18:55.326 }' 00:18:55.326 17:11:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.326 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:55.326 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.585 [2024-11-20 17:11:19.274530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:55.585 17:11:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.585 "name": "raid_bdev1", 00:18:55.585 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:55.585 "strip_size_kb": 0, 00:18:55.585 "state": "online", 00:18:55.585 
"raid_level": "raid1", 00:18:55.585 "superblock": true, 00:18:55.585 "num_base_bdevs": 2, 00:18:55.585 "num_base_bdevs_discovered": 1, 00:18:55.585 "num_base_bdevs_operational": 1, 00:18:55.585 "base_bdevs_list": [ 00:18:55.585 { 00:18:55.585 "name": null, 00:18:55.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.585 "is_configured": false, 00:18:55.585 "data_offset": 0, 00:18:55.585 "data_size": 7936 00:18:55.585 }, 00:18:55.585 { 00:18:55.585 "name": "BaseBdev2", 00:18:55.585 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:55.585 "is_configured": true, 00:18:55.585 "data_offset": 256, 00:18:55.585 "data_size": 7936 00:18:55.585 } 00:18:55.585 ] 00:18:55.585 }' 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.585 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.150 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:56.150 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.150 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.150 [2024-11-20 17:11:19.790611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:56.150 [2024-11-20 17:11:19.790832] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:56.150 [2024-11-20 17:11:19.790891] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:56.150 [2024-11-20 17:11:19.790933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:56.150 [2024-11-20 17:11:19.807103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:56.150 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.150 17:11:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:56.150 [2024-11-20 17:11:19.809617] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:57.114 "name": "raid_bdev1", 00:18:57.114 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:57.114 "strip_size_kb": 0, 00:18:57.114 "state": "online", 00:18:57.114 "raid_level": "raid1", 00:18:57.114 "superblock": true, 00:18:57.114 "num_base_bdevs": 2, 00:18:57.114 "num_base_bdevs_discovered": 2, 00:18:57.114 "num_base_bdevs_operational": 2, 00:18:57.114 "process": { 00:18:57.114 "type": "rebuild", 00:18:57.114 "target": "spare", 00:18:57.114 "progress": { 00:18:57.114 "blocks": 2560, 00:18:57.114 "percent": 32 00:18:57.114 } 00:18:57.114 }, 00:18:57.114 "base_bdevs_list": [ 00:18:57.114 { 00:18:57.114 "name": "spare", 00:18:57.114 "uuid": "febe0287-bd42-508c-b997-8ae809dcee66", 00:18:57.114 "is_configured": true, 00:18:57.114 "data_offset": 256, 00:18:57.114 "data_size": 7936 00:18:57.114 }, 00:18:57.114 { 00:18:57.114 "name": "BaseBdev2", 00:18:57.114 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:57.114 "is_configured": true, 00:18:57.114 "data_offset": 256, 00:18:57.114 "data_size": 7936 00:18:57.114 } 00:18:57.114 ] 00:18:57.114 }' 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.114 17:11:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.114 [2024-11-20 17:11:20.978552] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:57.373 [2024-11-20 17:11:21.017509] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:57.373 [2024-11-20 17:11:21.017721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.373 [2024-11-20 17:11:21.017749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:57.373 [2024-11-20 17:11:21.017783] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:57.373 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.374 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:57.374 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.374 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.374 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.374 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.374 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:57.374 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.374 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.374 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.374 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.374 17:11:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.374 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.374 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.374 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.374 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.374 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.374 "name": "raid_bdev1", 00:18:57.374 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:57.374 "strip_size_kb": 0, 00:18:57.374 "state": "online", 00:18:57.374 "raid_level": "raid1", 00:18:57.374 "superblock": true, 00:18:57.374 "num_base_bdevs": 2, 00:18:57.374 "num_base_bdevs_discovered": 1, 00:18:57.374 "num_base_bdevs_operational": 1, 00:18:57.374 "base_bdevs_list": [ 00:18:57.374 { 00:18:57.374 "name": null, 00:18:57.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.374 "is_configured": false, 00:18:57.374 "data_offset": 0, 00:18:57.374 "data_size": 7936 00:18:57.374 }, 00:18:57.374 { 00:18:57.374 "name": "BaseBdev2", 00:18:57.374 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:57.374 "is_configured": true, 00:18:57.374 "data_offset": 256, 00:18:57.374 "data_size": 7936 00:18:57.374 } 00:18:57.374 ] 00:18:57.374 }' 00:18:57.374 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.374 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.941 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:57.941 17:11:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.941 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.941 [2024-11-20 17:11:21.586369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:57.941 [2024-11-20 17:11:21.586667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.941 [2024-11-20 17:11:21.586742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:57.941 [2024-11-20 17:11:21.587028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.941 [2024-11-20 17:11:21.587348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.941 [2024-11-20 17:11:21.587386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:57.941 [2024-11-20 17:11:21.587458] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:57.941 [2024-11-20 17:11:21.587480] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:57.941 [2024-11-20 17:11:21.587506] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:57.941 [2024-11-20 17:11:21.587545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:57.941 [2024-11-20 17:11:21.602745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:57.941 spare 00:18:57.941 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.941 17:11:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:57.941 [2024-11-20 17:11:21.605484] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:58.875 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.875 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.875 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:58.875 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:58.875 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.875 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.876 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.876 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.876 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.876 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.876 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:58.876 "name": "raid_bdev1", 00:18:58.876 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:58.876 "strip_size_kb": 0, 00:18:58.876 "state": "online", 00:18:58.876 "raid_level": "raid1", 00:18:58.876 "superblock": true, 00:18:58.876 "num_base_bdevs": 2, 00:18:58.876 "num_base_bdevs_discovered": 2, 00:18:58.876 "num_base_bdevs_operational": 2, 00:18:58.876 "process": { 00:18:58.876 "type": "rebuild", 00:18:58.876 "target": "spare", 00:18:58.876 "progress": { 00:18:58.876 "blocks": 2560, 00:18:58.876 "percent": 32 00:18:58.876 } 00:18:58.876 }, 00:18:58.876 "base_bdevs_list": [ 00:18:58.876 { 00:18:58.876 "name": "spare", 00:18:58.876 "uuid": "febe0287-bd42-508c-b997-8ae809dcee66", 00:18:58.876 "is_configured": true, 00:18:58.876 "data_offset": 256, 00:18:58.876 "data_size": 7936 00:18:58.876 }, 00:18:58.876 { 00:18:58.876 "name": "BaseBdev2", 00:18:58.876 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:58.876 "is_configured": true, 00:18:58.876 "data_offset": 256, 00:18:58.876 "data_size": 7936 00:18:58.876 } 00:18:58.876 ] 00:18:58.876 }' 00:18:58.876 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.876 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:58.876 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.134 [2024-11-20 
17:11:22.778589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:59.134 [2024-11-20 17:11:22.814561] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:59.134 [2024-11-20 17:11:22.814823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.134 [2024-11-20 17:11:22.814857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:59.134 [2024-11-20 17:11:22.814877] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.134 17:11:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.134 "name": "raid_bdev1", 00:18:59.134 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:59.134 "strip_size_kb": 0, 00:18:59.134 "state": "online", 00:18:59.134 "raid_level": "raid1", 00:18:59.134 "superblock": true, 00:18:59.134 "num_base_bdevs": 2, 00:18:59.134 "num_base_bdevs_discovered": 1, 00:18:59.134 "num_base_bdevs_operational": 1, 00:18:59.134 "base_bdevs_list": [ 00:18:59.134 { 00:18:59.134 "name": null, 00:18:59.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.134 "is_configured": false, 00:18:59.134 "data_offset": 0, 00:18:59.134 "data_size": 7936 00:18:59.134 }, 00:18:59.134 { 00:18:59.134 "name": "BaseBdev2", 00:18:59.134 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:59.134 "is_configured": true, 00:18:59.134 "data_offset": 256, 00:18:59.134 "data_size": 7936 00:18:59.134 } 00:18:59.134 ] 00:18:59.134 }' 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.134 17:11:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.700 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:59.700 17:11:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.700 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:59.700 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:59.700 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.700 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.700 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.700 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.700 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.700 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.701 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.701 "name": "raid_bdev1", 00:18:59.701 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:18:59.701 "strip_size_kb": 0, 00:18:59.701 "state": "online", 00:18:59.701 "raid_level": "raid1", 00:18:59.701 "superblock": true, 00:18:59.701 "num_base_bdevs": 2, 00:18:59.701 "num_base_bdevs_discovered": 1, 00:18:59.701 "num_base_bdevs_operational": 1, 00:18:59.701 "base_bdevs_list": [ 00:18:59.701 { 00:18:59.701 "name": null, 00:18:59.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.701 "is_configured": false, 00:18:59.701 "data_offset": 0, 00:18:59.701 "data_size": 7936 00:18:59.701 }, 00:18:59.701 { 00:18:59.701 "name": "BaseBdev2", 00:18:59.701 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:18:59.701 "is_configured": true, 00:18:59.701 "data_offset": 256, 
00:18:59.701 "data_size": 7936 00:18:59.701 } 00:18:59.701 ] 00:18:59.701 }' 00:18:59.701 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.701 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:59.701 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.701 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:59.701 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:59.701 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.701 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.701 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.701 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:59.701 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.701 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.701 [2024-11-20 17:11:23.539886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:59.701 [2024-11-20 17:11:23.540094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.701 [2024-11-20 17:11:23.540138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:59.701 [2024-11-20 17:11:23.540154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.701 [2024-11-20 17:11:23.540407] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.701 [2024-11-20 17:11:23.540431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:59.701 [2024-11-20 17:11:23.540500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:59.701 [2024-11-20 17:11:23.540532] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:59.701 [2024-11-20 17:11:23.540545] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:59.701 [2024-11-20 17:11:23.540568] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:59.701 BaseBdev1 00:18:59.701 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.701 17:11:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:01.073 17:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:01.073 17:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.073 17:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.073 17:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.073 17:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.073 17:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:01.073 17:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.073 17:11:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.073 17:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.073 17:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.073 17:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.073 17:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.073 17:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.073 17:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.073 17:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.073 17:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.073 "name": "raid_bdev1", 00:19:01.073 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:19:01.073 "strip_size_kb": 0, 00:19:01.073 "state": "online", 00:19:01.073 "raid_level": "raid1", 00:19:01.073 "superblock": true, 00:19:01.073 "num_base_bdevs": 2, 00:19:01.073 "num_base_bdevs_discovered": 1, 00:19:01.074 "num_base_bdevs_operational": 1, 00:19:01.074 "base_bdevs_list": [ 00:19:01.074 { 00:19:01.074 "name": null, 00:19:01.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.074 "is_configured": false, 00:19:01.074 "data_offset": 0, 00:19:01.074 "data_size": 7936 00:19:01.074 }, 00:19:01.074 { 00:19:01.074 "name": "BaseBdev2", 00:19:01.074 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:19:01.074 "is_configured": true, 00:19:01.074 "data_offset": 256, 00:19:01.074 "data_size": 7936 00:19:01.074 } 00:19:01.074 ] 00:19:01.074 }' 00:19:01.074 17:11:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.074 17:11:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.332 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.332 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.332 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:01.332 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:01.332 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.332 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.332 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.332 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.332 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.332 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.332 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.332 "name": "raid_bdev1", 00:19:01.332 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:19:01.332 "strip_size_kb": 0, 00:19:01.332 "state": "online", 00:19:01.332 "raid_level": "raid1", 00:19:01.332 "superblock": true, 00:19:01.332 "num_base_bdevs": 2, 00:19:01.332 "num_base_bdevs_discovered": 1, 00:19:01.332 "num_base_bdevs_operational": 1, 00:19:01.332 "base_bdevs_list": [ 00:19:01.332 { 00:19:01.332 "name": 
null, 00:19:01.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.332 "is_configured": false, 00:19:01.332 "data_offset": 0, 00:19:01.332 "data_size": 7936 00:19:01.332 }, 00:19:01.332 { 00:19:01.332 "name": "BaseBdev2", 00:19:01.332 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:19:01.332 "is_configured": true, 00:19:01.332 "data_offset": 256, 00:19:01.332 "data_size": 7936 00:19:01.332 } 00:19:01.332 ] 00:19:01.332 }' 00:19:01.332 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.332 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:01.332 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.590 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.590 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:01.590 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:01.590 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:01.590 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:01.590 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.590 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:01.590 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.590 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:01.590 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.590 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.590 [2024-11-20 17:11:25.248484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:01.590 [2024-11-20 17:11:25.248915] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:01.590 [2024-11-20 17:11:25.248965] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:01.590 request: 00:19:01.590 { 00:19:01.590 "base_bdev": "BaseBdev1", 00:19:01.590 "raid_bdev": "raid_bdev1", 00:19:01.590 "method": "bdev_raid_add_base_bdev", 00:19:01.590 "req_id": 1 00:19:01.590 } 00:19:01.590 Got JSON-RPC error response 00:19:01.590 response: 00:19:01.590 { 00:19:01.590 "code": -22, 00:19:01.590 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:01.590 } 00:19:01.590 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:01.590 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:01.590 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:01.590 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:01.590 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:01.590 17:11:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:02.524 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:02.524 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.524 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.524 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.524 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.524 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:02.524 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.524 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.524 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.524 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.524 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.524 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.524 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.524 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.524 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.524 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.524 "name": "raid_bdev1", 00:19:02.524 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:19:02.524 "strip_size_kb": 0, 
00:19:02.524 "state": "online", 00:19:02.524 "raid_level": "raid1", 00:19:02.524 "superblock": true, 00:19:02.524 "num_base_bdevs": 2, 00:19:02.524 "num_base_bdevs_discovered": 1, 00:19:02.524 "num_base_bdevs_operational": 1, 00:19:02.524 "base_bdevs_list": [ 00:19:02.524 { 00:19:02.525 "name": null, 00:19:02.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.525 "is_configured": false, 00:19:02.525 "data_offset": 0, 00:19:02.525 "data_size": 7936 00:19:02.525 }, 00:19:02.525 { 00:19:02.525 "name": "BaseBdev2", 00:19:02.525 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:19:02.525 "is_configured": true, 00:19:02.525 "data_offset": 256, 00:19:02.525 "data_size": 7936 00:19:02.525 } 00:19:02.525 ] 00:19:02.525 }' 00:19:02.525 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.525 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.092 
17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.092 "name": "raid_bdev1", 00:19:03.092 "uuid": "ccb1d6be-7cfc-447c-b91e-f87c8f6cc13d", 00:19:03.092 "strip_size_kb": 0, 00:19:03.092 "state": "online", 00:19:03.092 "raid_level": "raid1", 00:19:03.092 "superblock": true, 00:19:03.092 "num_base_bdevs": 2, 00:19:03.092 "num_base_bdevs_discovered": 1, 00:19:03.092 "num_base_bdevs_operational": 1, 00:19:03.092 "base_bdevs_list": [ 00:19:03.092 { 00:19:03.092 "name": null, 00:19:03.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.092 "is_configured": false, 00:19:03.092 "data_offset": 0, 00:19:03.092 "data_size": 7936 00:19:03.092 }, 00:19:03.092 { 00:19:03.092 "name": "BaseBdev2", 00:19:03.092 "uuid": "fd1b70fd-e9d3-5fa2-8055-8f8da9ea3f76", 00:19:03.092 "is_configured": true, 00:19:03.092 "data_offset": 256, 00:19:03.092 "data_size": 7936 00:19:03.092 } 00:19:03.092 ] 00:19:03.092 }' 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89271 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89271 ']' 00:19:03.092 17:11:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89271 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:03.092 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.350 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89271 00:19:03.350 killing process with pid 89271 00:19:03.350 Received shutdown signal, test time was about 60.000000 seconds 00:19:03.350 00:19:03.350 Latency(us) 00:19:03.350 [2024-11-20T17:11:27.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.350 [2024-11-20T17:11:27.219Z] =================================================================================================================== 00:19:03.350 [2024-11-20T17:11:27.219Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:03.350 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:03.350 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:03.350 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89271' 00:19:03.350 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89271 00:19:03.350 17:11:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89271 00:19:03.350 [2024-11-20 17:11:26.990386] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:03.350 [2024-11-20 17:11:26.990613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.350 [2024-11-20 17:11:26.990680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:19:03.350 [2024-11-20 17:11:26.990700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:03.608 [2024-11-20 17:11:27.256942] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:04.541 17:11:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:04.541 00:19:04.541 real 0m18.642s 00:19:04.541 user 0m25.472s 00:19:04.541 sys 0m1.493s 00:19:04.541 17:11:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.541 17:11:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.541 ************************************ 00:19:04.541 END TEST raid_rebuild_test_sb_md_interleaved 00:19:04.541 ************************************ 00:19:04.541 17:11:28 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:04.541 17:11:28 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:04.541 17:11:28 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89271 ']' 00:19:04.541 17:11:28 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89271 00:19:04.541 17:11:28 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:04.541 00:19:04.541 real 12m54.198s 00:19:04.541 user 18m17.326s 00:19:04.541 sys 1m44.143s 00:19:04.541 17:11:28 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.541 ************************************ 00:19:04.541 END TEST bdev_raid 00:19:04.541 ************************************ 00:19:04.541 17:11:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:04.541 17:11:28 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:04.541 17:11:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:04.541 17:11:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.541 17:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:04.541 
************************************ 00:19:04.541 START TEST spdkcli_raid 00:19:04.541 ************************************ 00:19:04.541 17:11:28 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:04.800 * Looking for test storage... 00:19:04.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:04.800 17:11:28 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:04.800 17:11:28 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:19:04.800 17:11:28 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:04.800 17:11:28 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:04.800 17:11:28 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:04.800 17:11:28 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:04.800 17:11:28 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:04.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.800 --rc genhtml_branch_coverage=1 00:19:04.800 --rc genhtml_function_coverage=1 00:19:04.800 --rc genhtml_legend=1 00:19:04.800 --rc geninfo_all_blocks=1 00:19:04.800 --rc geninfo_unexecuted_blocks=1 00:19:04.800 00:19:04.800 ' 00:19:04.800 17:11:28 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:04.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.800 --rc genhtml_branch_coverage=1 00:19:04.800 --rc genhtml_function_coverage=1 00:19:04.800 --rc genhtml_legend=1 00:19:04.800 --rc geninfo_all_blocks=1 00:19:04.800 --rc geninfo_unexecuted_blocks=1 00:19:04.800 00:19:04.800 ' 00:19:04.800 
17:11:28 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:04.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.800 --rc genhtml_branch_coverage=1 00:19:04.800 --rc genhtml_function_coverage=1 00:19:04.800 --rc genhtml_legend=1 00:19:04.800 --rc geninfo_all_blocks=1 00:19:04.800 --rc geninfo_unexecuted_blocks=1 00:19:04.800 00:19:04.800 ' 00:19:04.800 17:11:28 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:04.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.800 --rc genhtml_branch_coverage=1 00:19:04.800 --rc genhtml_function_coverage=1 00:19:04.800 --rc genhtml_legend=1 00:19:04.800 --rc geninfo_all_blocks=1 00:19:04.800 --rc geninfo_unexecuted_blocks=1 00:19:04.800 00:19:04.800 ' 00:19:04.800 17:11:28 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:04.800 17:11:28 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:04.800 17:11:28 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:04.800 17:11:28 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:04.800 17:11:28 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:04.800 17:11:28 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:04.800 17:11:28 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:04.800 17:11:28 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:04.800 17:11:28 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:04.800 17:11:28 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:04.800 17:11:28 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:04.801 17:11:28 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:04.801 17:11:28 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:04.801 17:11:28 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:04.801 17:11:28 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:04.801 17:11:28 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:04.801 17:11:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:04.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.801 17:11:28 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:04.801 17:11:28 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89959 00:19:04.801 17:11:28 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89959 00:19:04.801 17:11:28 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:04.801 17:11:28 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89959 ']' 00:19:04.801 17:11:28 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.801 17:11:28 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.801 17:11:28 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.801 17:11:28 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.801 17:11:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:05.060 [2024-11-20 17:11:28.736645] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:19:05.060 [2024-11-20 17:11:28.736846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89959 ] 00:19:05.060 [2024-11-20 17:11:28.909263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:05.318 [2024-11-20 17:11:29.038482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.318 [2024-11-20 17:11:29.038501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.251 17:11:29 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.251 17:11:29 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:19:06.251 17:11:29 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:06.251 17:11:29 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:06.251 17:11:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:06.251 17:11:29 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:06.251 17:11:29 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.251 17:11:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:06.251 17:11:29 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:06.251 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:06.251 ' 00:19:07.625 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:07.625 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:07.884 17:11:31 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:07.884 17:11:31 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:07.884 17:11:31 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:07.884 17:11:31 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:07.884 17:11:31 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.884 17:11:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:07.884 17:11:31 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:07.884 ' 00:19:08.893 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:09.151 17:11:32 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:09.151 17:11:32 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:09.151 17:11:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.151 17:11:32 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:09.151 17:11:32 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:09.151 17:11:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.151 17:11:32 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:09.151 17:11:32 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:09.717 17:11:33 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:09.717 17:11:33 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:09.717 17:11:33 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:09.717 17:11:33 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:09.717 17:11:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.717 17:11:33 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:09.717 17:11:33 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:09.717 17:11:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.717 17:11:33 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:09.717 ' 00:19:11.091 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:11.091 17:11:34 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:11.091 17:11:34 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.091 17:11:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:11.091 17:11:34 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:11.091 17:11:34 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:11.091 17:11:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:11.091 17:11:34 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:11.091 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:11.091 ' 00:19:12.466 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:12.466 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:12.466 17:11:36 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:12.466 17:11:36 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:12.466 17:11:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:12.466 17:11:36 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89959 00:19:12.466 17:11:36 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89959 ']' 00:19:12.466 17:11:36 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89959 00:19:12.466 17:11:36 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:19:12.466 17:11:36 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.466 17:11:36 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89959 00:19:12.724 killing process with pid 89959 00:19:12.724 17:11:36 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:12.724 17:11:36 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:12.724 17:11:36 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89959' 00:19:12.724 17:11:36 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89959 00:19:12.724 17:11:36 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89959 00:19:14.625 17:11:38 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:14.625 17:11:38 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89959 ']' 00:19:14.625 17:11:38 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89959 00:19:14.625 17:11:38 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89959 ']' 00:19:14.625 Process with pid 89959 is not found 00:19:14.625 17:11:38 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89959 00:19:14.625 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89959) - No such process 00:19:14.625 17:11:38 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89959 is not found' 00:19:14.625 17:11:38 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:14.625 17:11:38 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:14.625 17:11:38 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:14.625 17:11:38 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:14.625 00:19:14.625 real 0m10.077s 00:19:14.625 user 0m20.935s 00:19:14.625 sys 
0m1.213s 00:19:14.625 17:11:38 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.625 ************************************ 00:19:14.625 END TEST spdkcli_raid 00:19:14.625 ************************************ 00:19:14.625 17:11:38 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:14.884 17:11:38 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:14.884 17:11:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:14.884 17:11:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.884 17:11:38 -- common/autotest_common.sh@10 -- # set +x 00:19:14.884 ************************************ 00:19:14.884 START TEST blockdev_raid5f 00:19:14.884 ************************************ 00:19:14.884 17:11:38 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:14.884 * Looking for test storage... 00:19:14.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:14.884 17:11:38 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:14.884 17:11:38 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:14.884 17:11:38 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:19:14.884 17:11:38 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:14.884 17:11:38 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:14.884 17:11:38 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:14.884 17:11:38 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:14.884 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.884 --rc genhtml_branch_coverage=1 00:19:14.884 --rc genhtml_function_coverage=1 00:19:14.884 --rc genhtml_legend=1 00:19:14.884 --rc geninfo_all_blocks=1 00:19:14.884 --rc geninfo_unexecuted_blocks=1 00:19:14.884 00:19:14.884 ' 00:19:14.884 17:11:38 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:14.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.884 --rc genhtml_branch_coverage=1 00:19:14.884 --rc genhtml_function_coverage=1 00:19:14.884 --rc genhtml_legend=1 00:19:14.884 --rc geninfo_all_blocks=1 00:19:14.884 --rc geninfo_unexecuted_blocks=1 00:19:14.884 00:19:14.884 ' 00:19:14.884 17:11:38 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:14.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.884 --rc genhtml_branch_coverage=1 00:19:14.884 --rc genhtml_function_coverage=1 00:19:14.884 --rc genhtml_legend=1 00:19:14.884 --rc geninfo_all_blocks=1 00:19:14.884 --rc geninfo_unexecuted_blocks=1 00:19:14.884 00:19:14.884 ' 00:19:14.884 17:11:38 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:14.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.884 --rc genhtml_branch_coverage=1 00:19:14.884 --rc genhtml_function_coverage=1 00:19:14.884 --rc genhtml_legend=1 00:19:14.884 --rc geninfo_all_blocks=1 00:19:14.884 --rc geninfo_unexecuted_blocks=1 00:19:14.884 00:19:14.884 ' 00:19:14.884 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:14.884 17:11:38 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:14.884 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:14.884 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:14.884 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:19:14.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90234 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90234 00:19:14.885 17:11:38 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90234 ']' 00:19:14.885 17:11:38 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:14.885 17:11:38 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.885 17:11:38 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.885 17:11:38 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.885 17:11:38 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.885 17:11:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:15.143 [2024-11-20 17:11:38.853001] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:19:15.143 [2024-11-20 17:11:38.853525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90234 ] 00:19:15.401 [2024-11-20 17:11:39.042705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.401 [2024-11-20 17:11:39.172414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.336 17:11:39 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.336 17:11:39 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:16.336 17:11:39 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:16.336 17:11:39 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:19:16.336 17:11:39 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:16.336 17:11:39 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.336 17:11:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:16.336 Malloc0 00:19:16.336 Malloc1 00:19:16.336 Malloc2 00:19:16.336 17:11:40 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.336 17:11:40 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:16.336 17:11:40 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.336 17:11:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:16.336 17:11:40 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.337 17:11:40 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:19:16.337 17:11:40 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:16.337 17:11:40 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.337 17:11:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:16.337 17:11:40 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.337 17:11:40 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:16.337 17:11:40 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.337 17:11:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:16.337 17:11:40 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.337 17:11:40 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:16.337 17:11:40 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.337 17:11:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:16.337 17:11:40 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.337 17:11:40 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:16.337 17:11:40 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:19:16.337 17:11:40 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.337 17:11:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:16.337 17:11:40 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:16.595 17:11:40 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.595 17:11:40 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:16.595 17:11:40 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "fa2fa1da-0769-4ccc-998a-58891b48db1d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fa2fa1da-0769-4ccc-998a-58891b48db1d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "fa2fa1da-0769-4ccc-998a-58891b48db1d",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "8cd257b8-d6ee-40fa-a68d-3274733bf750",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "9f956387-dcdc-4321-8abf-f9fdbe88500c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "12c92167-51a2-4f7e-b1f6-02bb6111e041",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:16.595 17:11:40 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:16.595 17:11:40 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:16.595 17:11:40 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:19:16.595 17:11:40 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:16.595 17:11:40 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90234 00:19:16.595 17:11:40 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90234 ']' 00:19:16.595 17:11:40 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90234 00:19:16.595 17:11:40 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:16.595 17:11:40 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.595 
17:11:40 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90234 00:19:16.595 killing process with pid 90234 00:19:16.595 17:11:40 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.595 17:11:40 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.595 17:11:40 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90234' 00:19:16.595 17:11:40 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90234 00:19:16.595 17:11:40 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90234 00:19:19.137 17:11:42 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:19.137 17:11:42 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:19.137 17:11:42 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:19.137 17:11:42 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.137 17:11:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:19.137 ************************************ 00:19:19.137 START TEST bdev_hello_world 00:19:19.137 ************************************ 00:19:19.137 17:11:42 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:19.137 [2024-11-20 17:11:42.671589] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:19:19.137 [2024-11-20 17:11:42.671821] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90290 ] 00:19:19.137 [2024-11-20 17:11:42.859336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.137 [2024-11-20 17:11:42.978224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.704 [2024-11-20 17:11:43.468234] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:19.704 [2024-11-20 17:11:43.468630] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:19.704 [2024-11-20 17:11:43.468671] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:19.704 [2024-11-20 17:11:43.469284] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:19.704 [2024-11-20 17:11:43.469506] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:19.704 [2024-11-20 17:11:43.469564] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:19.704 [2024-11-20 17:11:43.469627] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:19:19.704 00:19:19.704 [2024-11-20 17:11:43.469665] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:21.080 00:19:21.080 real 0m2.107s 00:19:21.080 user 0m1.664s 00:19:21.080 sys 0m0.323s 00:19:21.080 17:11:44 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.080 17:11:44 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:21.080 ************************************ 00:19:21.080 END TEST bdev_hello_world 00:19:21.080 ************************************ 00:19:21.080 17:11:44 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:21.080 17:11:44 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:21.080 17:11:44 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.080 17:11:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:21.080 ************************************ 00:19:21.080 START TEST bdev_bounds 00:19:21.080 ************************************ 00:19:21.080 17:11:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:21.080 Process bdevio pid: 90332 00:19:21.080 17:11:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90332 00:19:21.080 17:11:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:21.080 17:11:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:21.080 17:11:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90332' 00:19:21.080 17:11:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90332 00:19:21.080 17:11:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90332 ']' 00:19:21.080 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.080 17:11:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.080 17:11:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.080 17:11:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.080 17:11:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.080 17:11:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:21.080 [2024-11-20 17:11:44.882324] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:19:21.080 [2024-11-20 17:11:44.882857] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90332 ] 00:19:21.338 [2024-11-20 17:11:45.071117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:21.596 [2024-11-20 17:11:45.233568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.596 [2024-11-20 17:11:45.233664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.596 [2024-11-20 17:11:45.233681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.162 17:11:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.162 17:11:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:22.162 17:11:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:22.162 I/O targets: 00:19:22.162 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:22.162 00:19:22.162 00:19:22.162 CUnit 
- A unit testing framework for C - Version 2.1-3 00:19:22.162 http://cunit.sourceforge.net/ 00:19:22.162 00:19:22.162 00:19:22.162 Suite: bdevio tests on: raid5f 00:19:22.162 Test: blockdev write read block ...passed 00:19:22.162 Test: blockdev write zeroes read block ...passed 00:19:22.162 Test: blockdev write zeroes read no split ...passed 00:19:22.421 Test: blockdev write zeroes read split ...passed 00:19:22.421 Test: blockdev write zeroes read split partial ...passed 00:19:22.421 Test: blockdev reset ...passed 00:19:22.421 Test: blockdev write read 8 blocks ...passed 00:19:22.421 Test: blockdev write read size > 128k ...passed 00:19:22.421 Test: blockdev write read invalid size ...passed 00:19:22.421 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:22.421 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:22.421 Test: blockdev write read max offset ...passed 00:19:22.421 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:22.421 Test: blockdev writev readv 8 blocks ...passed 00:19:22.421 Test: blockdev writev readv 30 x 1block ...passed 00:19:22.421 Test: blockdev writev readv block ...passed 00:19:22.421 Test: blockdev writev readv size > 128k ...passed 00:19:22.421 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:22.421 Test: blockdev comparev and writev ...passed 00:19:22.421 Test: blockdev nvme passthru rw ...passed 00:19:22.421 Test: blockdev nvme passthru vendor specific ...passed 00:19:22.421 Test: blockdev nvme admin passthru ...passed 00:19:22.421 Test: blockdev copy ...passed 00:19:22.421 00:19:22.421 Run Summary: Type Total Ran Passed Failed Inactive 00:19:22.421 suites 1 1 n/a 0 0 00:19:22.421 tests 23 23 23 0 0 00:19:22.421 asserts 130 130 130 0 n/a 00:19:22.421 00:19:22.421 Elapsed time = 0.535 seconds 00:19:22.421 0 00:19:22.421 17:11:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90332 00:19:22.421 17:11:46 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90332 ']' 00:19:22.421 17:11:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90332 00:19:22.421 17:11:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:22.421 17:11:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.421 17:11:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90332 00:19:22.421 17:11:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.421 17:11:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.421 17:11:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90332' 00:19:22.421 killing process with pid 90332 00:19:22.421 17:11:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90332 00:19:22.421 17:11:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90332 00:19:23.797 17:11:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:23.797 00:19:23.797 real 0m2.812s 00:19:23.797 user 0m6.837s 00:19:23.797 sys 0m0.490s 00:19:23.797 17:11:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.797 17:11:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:23.797 ************************************ 00:19:23.797 END TEST bdev_bounds 00:19:23.797 ************************************ 00:19:23.797 17:11:47 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:23.797 17:11:47 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:23.797 17:11:47 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.797 17:11:47 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.797 ************************************ 00:19:23.797 START TEST bdev_nbd 00:19:23.797 ************************************ 00:19:23.797 17:11:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:23.797 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:23.797 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:23.797 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:23.797 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:23.797 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:23.797 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:23.797 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:23.797 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:23.798 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:23.798 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:23.798 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:23.798 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:23.798 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:23.798 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:23.798 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # 
local bdev_list 00:19:23.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:23.798 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90396 00:19:23.798 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:23.798 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90396 /var/tmp/spdk-nbd.sock 00:19:23.798 17:11:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:23.798 17:11:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90396 ']' 00:19:23.798 17:11:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:23.798 17:11:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.798 17:11:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:23.798 17:11:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.798 17:11:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:24.056 [2024-11-20 17:11:47.707583] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:19:24.056 [2024-11-20 17:11:47.707967] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.056 [2024-11-20 17:11:47.896393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.314 [2024-11-20 17:11:48.020119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.882 17:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.882 17:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:24.882 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:24.882 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:24.882 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:24.882 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:24.882 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:24.882 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:24.882 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:24.882 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:24.882 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:24.882 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:24.882 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:24.882 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:24.882 17:11:48 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:25.140 1+0 records in 00:19:25.140 1+0 records out 00:19:25.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363948 s, 11.3 MB/s 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:25.140 17:11:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:25.413 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:25.413 { 00:19:25.413 "nbd_device": "/dev/nbd0", 00:19:25.413 "bdev_name": "raid5f" 00:19:25.413 } 00:19:25.413 ]' 00:19:25.413 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:25.413 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:25.413 { 00:19:25.413 "nbd_device": "/dev/nbd0", 00:19:25.413 "bdev_name": "raid5f" 00:19:25.413 } 00:19:25.413 ]' 00:19:25.413 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:25.413 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:25.413 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:25.413 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:25.413 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:25.413 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:25.413 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.413 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:25.677 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:25.936 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:25.936 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:25.936 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.936 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.936 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:25.936 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:25.936 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.936 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:25.936 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:25.936 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:26.195 17:11:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:26.453 /dev/nbd0 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:26.453 17:11:50 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:26.453 1+0 records in 00:19:26.453 1+0 records out 00:19:26.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309078 s, 13.3 MB/s 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.453 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:26.712 { 00:19:26.712 "nbd_device": "/dev/nbd0", 00:19:26.712 "bdev_name": "raid5f" 00:19:26.712 } 00:19:26.712 ]' 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:26.712 { 00:19:26.712 "nbd_device": "/dev/nbd0", 00:19:26.712 "bdev_name": "raid5f" 00:19:26.712 } 00:19:26.712 ]' 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:26.712 256+0 records in 00:19:26.712 256+0 records out 00:19:26.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0073522 s, 143 MB/s 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:26.712 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:26.970 256+0 records in 00:19:26.970 256+0 records out 00:19:26.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0400981 s, 26.2 MB/s 00:19:26.970 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:26.970 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:26.970 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:26.970 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:26.970 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:26.970 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:26.970 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:26.970 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:26.971 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:26.971 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:26.971 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:26.971 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.971 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:26.971 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:26.971 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:26.971 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:26.971 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:27.229 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:27.229 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:27.229 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:27.229 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:27.229 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:27.229 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:27.229 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:27.229 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:27.229 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:27.229 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:27.229 17:11:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:27.488 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:27.488 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:27.488 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:27.488 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:27.488 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:27.488 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:27.488 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:27.488 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:27.488 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:27.488 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:27.488 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:27.488 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:27.488 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:27.488 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:27.488 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:27.488 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:27.747 malloc_lvol_verify 00:19:27.747 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:28.004 70c9107a-fbc4-42a8-8706-7486bf702b23 00:19:28.004 17:11:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:28.263 ca6f78c6-e7cc-4c99-836e-ddb3f838193f 00:19:28.263 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:28.521 /dev/nbd0 00:19:28.521 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:28.521 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:28.521 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:28.521 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:28.521 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:28.521 mke2fs 1.47.0 (5-Feb-2023) 00:19:28.521 Discarding device blocks: 0/4096 done 00:19:28.521 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:28.521 00:19:28.521 Allocating group tables: 0/1 done 00:19:28.521 Writing inode tables: 0/1 done 00:19:28.521 Creating journal (1024 blocks): done 00:19:28.522 Writing superblocks and filesystem accounting information: 0/1 done 00:19:28.522 00:19:28.522 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:28.522 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:28.522 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:28.522 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:28.522 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:28.522 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:28.522 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90396 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90396 ']' 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90396 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90396 00:19:28.781 killing process with pid 90396 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90396' 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90396 00:19:28.781 17:11:52 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90396 00:19:30.158 ************************************ 00:19:30.158 END TEST bdev_nbd 00:19:30.158 ************************************ 00:19:30.158 17:11:53 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:30.158 00:19:30.158 real 0m6.294s 00:19:30.158 user 0m8.941s 00:19:30.158 sys 0m1.446s 00:19:30.158 17:11:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.158 17:11:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:30.158 17:11:53 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:30.158 17:11:53 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:30.158 17:11:53 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:30.158 17:11:53 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:30.158 17:11:53 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:30.158 17:11:53 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.158 17:11:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:30.158 ************************************ 00:19:30.158 START TEST bdev_fio 00:19:30.158 ************************************ 00:19:30.158 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:30.158 17:11:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:30.417 ************************************ 00:19:30.417 START TEST bdev_fio_rw_verify 00:19:30.417 ************************************ 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:30.417 17:11:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:30.676 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:30.676 fio-3.35 00:19:30.676 Starting 1 thread 00:19:42.898 00:19:42.898 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90606: Wed Nov 20 17:12:05 2024 00:19:42.898 read: IOPS=8656, BW=33.8MiB/s (35.5MB/s)(338MiB/10001msec) 00:19:42.898 slat (usec): min=20, max=170, avg=28.03, stdev= 7.87 00:19:42.898 clat (usec): min=13, max=1130, avg=180.55, stdev=70.35 00:19:42.898 lat (usec): min=39, max=1154, avg=208.57, stdev=71.83 00:19:42.898 clat percentiles (usec): 00:19:42.898 | 50.000th=[ 180], 99.000th=[ 338], 99.900th=[ 383], 99.990th=[ 537], 00:19:42.898 | 99.999th=[ 1123] 00:19:42.898 write: IOPS=9137, BW=35.7MiB/s (37.4MB/s)(353MiB/9880msec); 0 zone resets 00:19:42.898 slat (usec): min=10, max=262, avg=22.69, stdev= 7.57 00:19:42.898 clat (usec): min=70, max=1240, avg=427.91, stdev=68.21 00:19:42.899 lat (usec): min=89, max=1402, avg=450.60, stdev=70.34 00:19:42.899 clat percentiles (usec): 00:19:42.899 | 50.000th=[ 429], 99.000th=[ 603], 99.900th=[ 717], 99.990th=[ 938], 00:19:42.899 | 99.999th=[ 1237] 00:19:42.899 bw ( KiB/s): min=33696, max=41704, per=98.20%, avg=35893.89, stdev=2439.11, samples=19 00:19:42.899 iops : min= 8424, max=10426, avg=8973.47, stdev=609.78, samples=19 00:19:42.899 lat (usec) : 20=0.01%, 50=0.01%, 100=7.85%, 
250=32.16%, 500=53.06% 00:19:42.899 lat (usec) : 750=6.89%, 1000=0.03% 00:19:42.899 lat (msec) : 2=0.01% 00:19:42.899 cpu : usr=98.57%, sys=0.61%, ctx=29, majf=0, minf=7534 00:19:42.899 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:42.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.899 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.899 issued rwts: total=86570,90277,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.899 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:42.899 00:19:42.899 Run status group 0 (all jobs): 00:19:42.899 READ: bw=33.8MiB/s (35.5MB/s), 33.8MiB/s-33.8MiB/s (35.5MB/s-35.5MB/s), io=338MiB (355MB), run=10001-10001msec 00:19:42.899 WRITE: bw=35.7MiB/s (37.4MB/s), 35.7MiB/s-35.7MiB/s (37.4MB/s-37.4MB/s), io=353MiB (370MB), run=9880-9880msec 00:19:42.899 ----------------------------------------------------- 00:19:42.899 Suppressions used: 00:19:42.899 count bytes template 00:19:42.899 1 7 /usr/src/fio/parse.c 00:19:42.899 1002 96192 /usr/src/fio/iolog.c 00:19:42.899 1 8 libtcmalloc_minimal.so 00:19:42.899 1 904 libcrypto.so 00:19:42.899 ----------------------------------------------------- 00:19:42.899 00:19:42.899 00:19:42.899 real 0m12.709s 00:19:42.899 user 0m13.090s 00:19:42.899 sys 0m0.850s 00:19:42.899 17:12:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.899 17:12:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:42.899 ************************************ 00:19:42.899 END TEST bdev_fio_rw_verify 00:19:42.899 ************************************ 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "fa2fa1da-0769-4ccc-998a-58891b48db1d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fa2fa1da-0769-4ccc-998a-58891b48db1d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "fa2fa1da-0769-4ccc-998a-58891b48db1d",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "8cd257b8-d6ee-40fa-a68d-3274733bf750",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "9f956387-dcdc-4321-8abf-f9fdbe88500c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "12c92167-51a2-4f7e-b1f6-02bb6111e041",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:43.159 /home/vagrant/spdk_repo/spdk 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:19:43.159 00:19:43.159 real 0m12.922s 00:19:43.159 user 0m13.193s 00:19:43.159 sys 0m0.940s 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.159 17:12:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:43.159 ************************************ 00:19:43.159 END TEST bdev_fio 00:19:43.159 ************************************ 00:19:43.159 17:12:06 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:43.159 17:12:06 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:43.159 17:12:06 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:43.159 17:12:06 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:43.159 17:12:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:43.159 ************************************ 00:19:43.159 START TEST bdev_verify 00:19:43.159 ************************************ 00:19:43.159 17:12:06 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:43.159 [2024-11-20 17:12:07.015161] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 
00:19:43.159 [2024-11-20 17:12:07.015357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90761 ] 00:19:43.418 [2024-11-20 17:12:07.197214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:43.677 [2024-11-20 17:12:07.323739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.677 [2024-11-20 17:12:07.323781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.244 Running I/O for 5 seconds... 00:19:46.128 15130.00 IOPS, 59.10 MiB/s [2024-11-20T17:12:10.947Z] 15471.00 IOPS, 60.43 MiB/s [2024-11-20T17:12:11.884Z] 15552.33 IOPS, 60.75 MiB/s [2024-11-20T17:12:13.261Z] 15619.25 IOPS, 61.01 MiB/s [2024-11-20T17:12:13.261Z] 15505.80 IOPS, 60.57 MiB/s 00:19:49.392 Latency(us) 00:19:49.392 [2024-11-20T17:12:13.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.392 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:49.392 Verification LBA range: start 0x0 length 0x2000 00:19:49.392 raid5f : 5.02 7746.63 30.26 0.00 0.00 24806.07 110.31 21686.46 00:19:49.392 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:49.392 Verification LBA range: start 0x2000 length 0x2000 00:19:49.392 raid5f : 5.01 7735.21 30.22 0.00 0.00 24977.25 197.35 22043.93 00:19:49.392 [2024-11-20T17:12:13.261Z] =================================================================================================================== 00:19:49.392 [2024-11-20T17:12:13.261Z] Total : 15481.84 60.48 0.00 0.00 24891.54 110.31 22043.93 00:19:50.330 00:19:50.330 real 0m7.132s 00:19:50.330 user 0m13.098s 00:19:50.330 sys 0m0.324s 00:19:50.330 17:12:14 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.330 17:12:14 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:50.330 ************************************ 00:19:50.330 END TEST bdev_verify 00:19:50.330 ************************************ 00:19:50.330 17:12:14 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:50.330 17:12:14 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:50.330 17:12:14 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.330 17:12:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:50.330 ************************************ 00:19:50.330 START TEST bdev_verify_big_io 00:19:50.330 ************************************ 00:19:50.330 17:12:14 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:50.589 [2024-11-20 17:12:14.205315] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:19:50.589 [2024-11-20 17:12:14.205505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90855 ] 00:19:50.589 [2024-11-20 17:12:14.390662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:50.849 [2024-11-20 17:12:14.514651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.849 [2024-11-20 17:12:14.514666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.417 Running I/O for 5 seconds... 
00:19:53.291 693.00 IOPS, 43.31 MiB/s [2024-11-20T17:12:18.539Z] 761.00 IOPS, 47.56 MiB/s [2024-11-20T17:12:19.475Z] 761.33 IOPS, 47.58 MiB/s [2024-11-20T17:12:20.422Z] 824.50 IOPS, 51.53 MiB/s [2024-11-20T17:12:20.422Z] 812.40 IOPS, 50.77 MiB/s 00:19:56.553 Latency(us) 00:19:56.553 [2024-11-20T17:12:20.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.553 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:56.553 Verification LBA range: start 0x0 length 0x200 00:19:56.553 raid5f : 5.29 407.44 25.47 0.00 0.00 7832474.32 271.83 343170.33 00:19:56.553 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:56.553 Verification LBA range: start 0x200 length 0x200 00:19:56.553 raid5f : 5.30 407.72 25.48 0.00 0.00 7824108.47 139.64 346983.33 00:19:56.553 [2024-11-20T17:12:20.422Z] =================================================================================================================== 00:19:56.553 [2024-11-20T17:12:20.422Z] Total : 815.17 50.95 0.00 0.00 7828289.46 139.64 346983.33 00:19:57.940 00:19:57.940 real 0m7.425s 00:19:57.940 user 0m13.672s 00:19:57.940 sys 0m0.333s 00:19:57.940 17:12:21 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.940 17:12:21 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:57.940 ************************************ 00:19:57.940 END TEST bdev_verify_big_io 00:19:57.940 ************************************ 00:19:57.940 17:12:21 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:57.940 17:12:21 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:57.940 17:12:21 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.940 17:12:21 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:57.940 ************************************ 00:19:57.940 START TEST bdev_write_zeroes 00:19:57.940 ************************************ 00:19:57.940 17:12:21 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:57.940 [2024-11-20 17:12:21.688237] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:19:57.940 [2024-11-20 17:12:21.688424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90953 ] 00:19:58.199 [2024-11-20 17:12:21.870935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.199 [2024-11-20 17:12:21.980945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.766 Running I/O for 1 seconds... 
00:19:59.703 22551.00 IOPS, 88.09 MiB/s 00:19:59.703 Latency(us) 00:19:59.703 [2024-11-20T17:12:23.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.703 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:59.703 raid5f : 1.01 22516.63 87.96 0.00 0.00 5663.55 1884.16 7745.16 00:19:59.703 [2024-11-20T17:12:23.572Z] =================================================================================================================== 00:19:59.703 [2024-11-20T17:12:23.572Z] Total : 22516.63 87.96 0.00 0.00 5663.55 1884.16 7745.16 00:20:01.079 00:20:01.079 real 0m3.081s 00:20:01.079 user 0m2.655s 00:20:01.079 sys 0m0.298s 00:20:01.079 17:12:24 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.079 ************************************ 00:20:01.079 END TEST bdev_write_zeroes 00:20:01.079 ************************************ 00:20:01.079 17:12:24 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:01.079 17:12:24 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:01.079 17:12:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:01.079 17:12:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.079 17:12:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:01.079 ************************************ 00:20:01.079 START TEST bdev_json_nonenclosed 00:20:01.079 ************************************ 00:20:01.079 17:12:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:01.079 [2024-11-20 
17:12:24.815269] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:20:01.079 [2024-11-20 17:12:24.815484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91007 ] 00:20:01.338 [2024-11-20 17:12:24.999758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.338 [2024-11-20 17:12:25.123914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.338 [2024-11-20 17:12:25.124030] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:01.338 [2024-11-20 17:12:25.124070] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:01.338 [2024-11-20 17:12:25.124085] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:01.596 00:20:01.597 real 0m0.647s 00:20:01.597 user 0m0.398s 00:20:01.597 sys 0m0.145s 00:20:01.597 17:12:25 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.597 17:12:25 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:01.597 ************************************ 00:20:01.597 END TEST bdev_json_nonenclosed 00:20:01.597 ************************************ 00:20:01.597 17:12:25 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:01.597 17:12:25 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:01.597 17:12:25 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.597 17:12:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:01.597 
************************************ 00:20:01.597 START TEST bdev_json_nonarray 00:20:01.597 ************************************ 00:20:01.597 17:12:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:01.856 [2024-11-20 17:12:25.512850] Starting SPDK v25.01-pre git sha1 25916e30c / DPDK 24.03.0 initialization... 00:20:01.856 [2024-11-20 17:12:25.513036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91031 ] 00:20:01.856 [2024-11-20 17:12:25.692092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.115 [2024-11-20 17:12:25.811637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.115 [2024-11-20 17:12:25.811827] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:02.115 [2024-11-20 17:12:25.811858] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:02.115 [2024-11-20 17:12:25.811882] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:02.374 00:20:02.374 real 0m0.638s 00:20:02.374 user 0m0.399s 00:20:02.374 sys 0m0.134s 00:20:02.374 17:12:26 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.374 ************************************ 00:20:02.374 END TEST bdev_json_nonarray 00:20:02.374 ************************************ 00:20:02.374 17:12:26 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:02.374 17:12:26 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:20:02.374 17:12:26 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:20:02.374 17:12:26 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:20:02.374 17:12:26 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:02.374 17:12:26 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:20:02.374 17:12:26 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:02.374 17:12:26 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:02.374 17:12:26 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:20:02.374 17:12:26 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:20:02.374 17:12:26 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:20:02.374 17:12:26 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:20:02.374 00:20:02.374 real 0m47.567s 00:20:02.374 user 1m5.053s 00:20:02.374 sys 0m5.414s 00:20:02.374 17:12:26 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.374 17:12:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:02.374 
************************************ 00:20:02.374 END TEST blockdev_raid5f 00:20:02.374 ************************************ 00:20:02.374 17:12:26 -- spdk/autotest.sh@194 -- # uname -s 00:20:02.374 17:12:26 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:02.374 17:12:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:02.374 17:12:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:02.374 17:12:26 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:20:02.374 17:12:26 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:20:02.374 17:12:26 -- spdk/autotest.sh@260 -- # timing_exit lib 00:20:02.374 17:12:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:02.374 17:12:26 -- common/autotest_common.sh@10 -- # set +x 00:20:02.374 17:12:26 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:20:02.374 17:12:26 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:20:02.374 17:12:26 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:20:02.374 17:12:26 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:02.374 17:12:26 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:02.374 17:12:26 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:02.374 17:12:26 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:02.374 17:12:26 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:02.374 17:12:26 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:02.374 17:12:26 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:02.374 17:12:26 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:02.374 17:12:26 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:02.374 17:12:26 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:02.374 17:12:26 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:20:02.374 17:12:26 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:02.374 17:12:26 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:02.374 17:12:26 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:20:02.374 17:12:26 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:20:02.374 17:12:26 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:20:02.374 17:12:26 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:20:02.374 17:12:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:02.374 17:12:26 -- common/autotest_common.sh@10 -- # set +x 00:20:02.374 17:12:26 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:20:02.374 17:12:26 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:20:02.374 17:12:26 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:20:02.374 17:12:26 -- common/autotest_common.sh@10 -- # set +x 00:20:04.278 INFO: APP EXITING 00:20:04.278 INFO: killing all VMs 00:20:04.278 INFO: killing vhost app 00:20:04.278 INFO: EXIT DONE 00:20:04.278 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:04.278 Waiting for block devices as requested 00:20:04.278 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:04.536 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:05.103 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:05.103 Cleaning 00:20:05.103 Removing: /var/run/dpdk/spdk0/config 00:20:05.103 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:05.103 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:05.103 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:05.103 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:05.103 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:05.103 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:05.103 Removing: /dev/shm/spdk_tgt_trace.pid56767 00:20:05.362 Removing: /var/run/dpdk/spdk0 00:20:05.362 Removing: /var/run/dpdk/spdk_pid56538 00:20:05.362 Removing: /var/run/dpdk/spdk_pid56767 00:20:05.362 Removing: /var/run/dpdk/spdk_pid56996 00:20:05.362 Removing: /var/run/dpdk/spdk_pid57099 00:20:05.362 Removing: /var/run/dpdk/spdk_pid57145 00:20:05.362 Removing: /var/run/dpdk/spdk_pid57279 00:20:05.362 Removing: /var/run/dpdk/spdk_pid57297 
00:20:05.362 Removing: /var/run/dpdk/spdk_pid57496 00:20:05.362 Removing: /var/run/dpdk/spdk_pid57601 00:20:05.362 Removing: /var/run/dpdk/spdk_pid57708 00:20:05.362 Removing: /var/run/dpdk/spdk_pid57830 00:20:05.362 Removing: /var/run/dpdk/spdk_pid57938 00:20:05.362 Removing: /var/run/dpdk/spdk_pid57972 00:20:05.362 Removing: /var/run/dpdk/spdk_pid58014 00:20:05.362 Removing: /var/run/dpdk/spdk_pid58090 00:20:05.362 Removing: /var/run/dpdk/spdk_pid58196 00:20:05.362 Removing: /var/run/dpdk/spdk_pid58660 00:20:05.362 Removing: /var/run/dpdk/spdk_pid58735 00:20:05.362 Removing: /var/run/dpdk/spdk_pid58803 00:20:05.362 Removing: /var/run/dpdk/spdk_pid58819 00:20:05.362 Removing: /var/run/dpdk/spdk_pid58964 00:20:05.362 Removing: /var/run/dpdk/spdk_pid58980 00:20:05.362 Removing: /var/run/dpdk/spdk_pid59126 00:20:05.362 Removing: /var/run/dpdk/spdk_pid59147 00:20:05.362 Removing: /var/run/dpdk/spdk_pid59211 00:20:05.362 Removing: /var/run/dpdk/spdk_pid59229 00:20:05.362 Removing: /var/run/dpdk/spdk_pid59293 00:20:05.362 Removing: /var/run/dpdk/spdk_pid59311 00:20:05.362 Removing: /var/run/dpdk/spdk_pid59506 00:20:05.362 Removing: /var/run/dpdk/spdk_pid59543 00:20:05.362 Removing: /var/run/dpdk/spdk_pid59632 00:20:05.362 Removing: /var/run/dpdk/spdk_pid60977 00:20:05.362 Removing: /var/run/dpdk/spdk_pid61183 00:20:05.362 Removing: /var/run/dpdk/spdk_pid61334 00:20:05.362 Removing: /var/run/dpdk/spdk_pid61983 00:20:05.362 Removing: /var/run/dpdk/spdk_pid62200 00:20:05.362 Removing: /var/run/dpdk/spdk_pid62340 00:20:05.362 Removing: /var/run/dpdk/spdk_pid62993 00:20:05.362 Removing: /var/run/dpdk/spdk_pid63330 00:20:05.362 Removing: /var/run/dpdk/spdk_pid63470 00:20:05.362 Removing: /var/run/dpdk/spdk_pid64883 00:20:05.362 Removing: /var/run/dpdk/spdk_pid65147 00:20:05.362 Removing: /var/run/dpdk/spdk_pid65293 00:20:05.362 Removing: /var/run/dpdk/spdk_pid66706 00:20:05.362 Removing: /var/run/dpdk/spdk_pid66967 00:20:05.362 Removing: /var/run/dpdk/spdk_pid67113 
00:20:05.362 Removing: /var/run/dpdk/spdk_pid68534 00:20:05.362 Removing: /var/run/dpdk/spdk_pid68985 00:20:05.362 Removing: /var/run/dpdk/spdk_pid69131 00:20:05.362 Removing: /var/run/dpdk/spdk_pid70644 00:20:05.362 Removing: /var/run/dpdk/spdk_pid70914 00:20:05.362 Removing: /var/run/dpdk/spdk_pid71060 00:20:05.362 Removing: /var/run/dpdk/spdk_pid72578 00:20:05.362 Removing: /var/run/dpdk/spdk_pid72848 00:20:05.362 Removing: /var/run/dpdk/spdk_pid72994 00:20:05.362 Removing: /var/run/dpdk/spdk_pid74497 00:20:05.362 Removing: /var/run/dpdk/spdk_pid74997 00:20:05.362 Removing: /var/run/dpdk/spdk_pid75137 00:20:05.362 Removing: /var/run/dpdk/spdk_pid75286 00:20:05.362 Removing: /var/run/dpdk/spdk_pid75732 00:20:05.362 Removing: /var/run/dpdk/spdk_pid76484 00:20:05.362 Removing: /var/run/dpdk/spdk_pid76886 00:20:05.362 Removing: /var/run/dpdk/spdk_pid77587 00:20:05.362 Removing: /var/run/dpdk/spdk_pid78061 00:20:05.362 Removing: /var/run/dpdk/spdk_pid78849 00:20:05.362 Removing: /var/run/dpdk/spdk_pid79269 00:20:05.362 Removing: /var/run/dpdk/spdk_pid81267 00:20:05.362 Removing: /var/run/dpdk/spdk_pid81711 00:20:05.362 Removing: /var/run/dpdk/spdk_pid82163 00:20:05.362 Removing: /var/run/dpdk/spdk_pid84293 00:20:05.362 Removing: /var/run/dpdk/spdk_pid84784 00:20:05.362 Removing: /var/run/dpdk/spdk_pid85293 00:20:05.362 Removing: /var/run/dpdk/spdk_pid86374 00:20:05.362 Removing: /var/run/dpdk/spdk_pid86702 00:20:05.362 Removing: /var/run/dpdk/spdk_pid87658 00:20:05.362 Removing: /var/run/dpdk/spdk_pid87987 00:20:05.362 Removing: /var/run/dpdk/spdk_pid88937 00:20:05.362 Removing: /var/run/dpdk/spdk_pid89271 00:20:05.362 Removing: /var/run/dpdk/spdk_pid89959 00:20:05.362 Removing: /var/run/dpdk/spdk_pid90234 00:20:05.362 Removing: /var/run/dpdk/spdk_pid90290 00:20:05.362 Removing: /var/run/dpdk/spdk_pid90332 00:20:05.362 Removing: /var/run/dpdk/spdk_pid90592 00:20:05.362 Removing: /var/run/dpdk/spdk_pid90761 00:20:05.620 Removing: /var/run/dpdk/spdk_pid90855 
00:20:05.620 Removing: /var/run/dpdk/spdk_pid90953 00:20:05.620 Removing: /var/run/dpdk/spdk_pid91007 00:20:05.620 Removing: /var/run/dpdk/spdk_pid91031 00:20:05.620 Clean 00:20:05.620 17:12:29 -- common/autotest_common.sh@1453 -- # return 0 00:20:05.620 17:12:29 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:20:05.620 17:12:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:05.620 17:12:29 -- common/autotest_common.sh@10 -- # set +x 00:20:05.620 17:12:29 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:20:05.620 17:12:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:05.620 17:12:29 -- common/autotest_common.sh@10 -- # set +x 00:20:05.620 17:12:29 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:05.620 17:12:29 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:05.620 17:12:29 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:05.620 17:12:29 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:20:05.620 17:12:29 -- spdk/autotest.sh@398 -- # hostname 00:20:05.620 17:12:29 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:05.878 geninfo: WARNING: invalid characters removed from testname! 
00:20:32.412 17:12:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:32.670 17:12:56 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:35.219 17:12:58 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:37.820 17:13:01 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:40.352 17:13:04 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:42.884 17:13:06 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:46.170 17:13:09 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:46.170 17:13:09 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:46.170 17:13:09 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:46.170 17:13:09 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:46.170 17:13:09 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:46.170 17:13:09 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:46.170 + [[ -n 5204 ]] 00:20:46.170 + sudo kill 5204 00:20:46.178 [Pipeline] } 00:20:46.192 [Pipeline] // timeout 00:20:46.196 [Pipeline] } 00:20:46.209 [Pipeline] // stage 00:20:46.214 [Pipeline] } 00:20:46.227 [Pipeline] // catchError 00:20:46.235 [Pipeline] stage 00:20:46.237 [Pipeline] { (Stop VM) 00:20:46.248 [Pipeline] sh 00:20:46.552 + vagrant halt 00:20:49.909 ==> default: Halting domain... 00:20:55.185 [Pipeline] sh 00:20:55.459 + vagrant destroy -f 00:20:58.740 ==> default: Removing domain... 
00:20:59.011 [Pipeline] sh 00:20:59.292 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:20:59.303 [Pipeline] } 00:20:59.318 [Pipeline] // stage 00:20:59.323 [Pipeline] } 00:20:59.336 [Pipeline] // dir 00:20:59.341 [Pipeline] } 00:20:59.355 [Pipeline] // wrap 00:20:59.361 [Pipeline] } 00:20:59.373 [Pipeline] // catchError 00:20:59.383 [Pipeline] stage 00:20:59.385 [Pipeline] { (Epilogue) 00:20:59.398 [Pipeline] sh 00:20:59.679 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:06.286 [Pipeline] catchError 00:21:06.287 [Pipeline] { 00:21:06.296 [Pipeline] sh 00:21:06.576 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:06.576 Artifacts sizes are good 00:21:06.585 [Pipeline] } 00:21:06.602 [Pipeline] // catchError 00:21:06.614 [Pipeline] archiveArtifacts 00:21:06.621 Archiving artifacts 00:21:06.725 [Pipeline] cleanWs 00:21:06.739 [WS-CLEANUP] Deleting project workspace... 00:21:06.739 [WS-CLEANUP] Deferred wipeout is used... 00:21:06.746 [WS-CLEANUP] done 00:21:06.748 [Pipeline] } 00:21:06.765 [Pipeline] // stage 00:21:06.772 [Pipeline] } 00:21:06.789 [Pipeline] // node 00:21:06.795 [Pipeline] End of Pipeline 00:21:06.847 Finished: SUCCESS